Call a method after a certain period of time without blocking - c#

I'm making a webserver application, and I have a Listener class which waits for connections and spawns an HTTPConnection, passing it the new Socket created, each time a connect request is made. The HTTPConnection class waits for data asynchronously (using Socket.BeginReceive).
I need the delayed execution for a timeout. If the client fails to send a full HTTP request within a certain amount of time, I want to close the connection. The waiting period should begin as soon as the HTTPConnection object is constructed, and a Timeout method should be called if the client never sends the full request. Obviously, I can't have the constructor pause for a few seconds, so the waiting needs to happen asynchronously. I also need to be able to cancel the task.
I could do new Thread(...) and all, but that's very poor design. Are there any other ways to schedule a method to be called later?

You could append all postponed events to an ordered data structure and have a background task check at a certain interval whether there is a timeout event that has to be executed.
You could also save these events in a database (if you have a lot of clients, I imagine keeping them all in memory could lead to high memory usage).
Your background task could then fetch all the expired events from the database and handle them at once.
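A minimal sketch of that first idea, assuming a SortedList keyed by deadline and a 100 ms check interval (both arbitrary choices; all names here are illustrative):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class TimeoutScheduler
{
    private readonly SortedList<DateTime, Action> pending =
        new SortedList<DateTime, Action>();
    private readonly object gate = new object();

    public TimeoutScheduler()
    {
        Task.Factory.StartNew(CheckLoop, TaskCreationOptions.LongRunning);
    }

    // Schedule onTimeout to run after 'delay'; the returned key cancels it.
    public DateTime Schedule(TimeSpan delay, Action onTimeout)
    {
        lock (gate)
        {
            DateTime due = DateTime.UtcNow + delay;
            while (pending.ContainsKey(due)) // SortedList keys must be unique
                due = due.AddTicks(1);
            pending.Add(due, onTimeout);
            return due;
        }
    }

    // Call this when the full HTTP request arrived in time.
    public void Cancel(DateTime key)
    {
        lock (gate) pending.Remove(key);
    }

    private void CheckLoop()
    {
        while (true)
        {
            Thread.Sleep(100); // the check interval
            var expired = new List<Action>();
            lock (gate)
            {
                while (pending.Count > 0 && pending.Keys[0] <= DateTime.UtcNow)
                {
                    expired.Add(pending.Values[0]);
                    pending.RemoveAt(0);
                }
            }
            foreach (Action a in expired)
                a(); // e.g. close the idle connection
        }
    }
}

From the HTTPConnection constructor you would call Schedule with the timeout and a callback that closes the socket, keep the returned key, and call Cancel once the full request has arrived.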

Related

HttpListener setting a total connection timeout

I am trying to control the maximum total duration of a single connection in HttpListener. I am aware of the TimeoutManager property and the five or so different timeout values it contains, but it is unclear whether setting each of those values covers, in total, all of the places where delay may occur in a connection.
I am looking for something more along the lines of: "If a connection lasts more than x seconds from the moment it was opened until now, abort it without sending anything else or waiting for anything else."
EDIT
To clarify, the scenario that I was experimenting with involves the server trying to send the response and the client not receiving. This causes HttpListenerResponse.OutputStream.Write() to hang indefinitely. I was trying to find a method that I can call from another thread to hard-abort the connection. I tried using OutputStream.Close() and got Cannot Close Stream until all bytes are written. I also tried HttpListenerResponse.Abort() which produced no visible effect.
None of those properties will do what you want. HttpListener is intended to control the request flow (incoming and outgoing data), so it doesn't manage the time between when the request has been fully received and when you send a response; it's your responsibility to take care of that.
You should create your own mechanism to abort the request if the total time exceeds the desired one. A timer can be enough: when a new connection is created, enqueue a timer with the total timeout as its expiry time; if the request ends before the timer expires, cancel the timer, otherwise let the timer abort the request.
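As a rough sketch of that timer idea (it assumes HttpListenerResponse.Abort() actually hard-aborts in your scenario, which the edit above suggests may not always hold; ProcessRequest is a hypothetical handler):

using System;
using System.Net;
using System.Threading;

void HandleConnection(HttpListenerContext context, TimeSpan totalTimeout)
{
    // One-shot timer: fires only if the request outlives the timeout.
    var timer = new Timer(
        _ => context.Response.Abort(),
        null, totalTimeout, Timeout.InfiniteTimeSpan);
    try
    {
        ProcessRequest(context); // your normal request handling
    }
    finally
    {
        timer.Dispose(); // finished in time: cancel the pending abort
    }
}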

What is a safe overhead for RequestAdditionalTime()?

I have a Windows service that spawns a set of child activities on separate threads and that should only terminate when all those activities have successfully completed. I do not know in advance how long it might take to terminate an activity after a stop signal is received. During OnStop(), I wait in intervals for that stop signal and keep requesting additional time for as long as the system is willing to grant it.
Here is the basic structure:
class MyService : ServiceBase
{
    private CancellationTokenSource stopAllActivities;
    private CountdownEvent runningActivities;

    protected override void OnStart(string[] args)
    {
        // ... start a set of activities that signal runningActivities
        //     when they stop
        // ... initialize runningActivities to the number of activities
    }

    protected override void OnStop()
    {
        stopAllActivities.Cancel();
        while (!runningActivities.Wait(10000))
        {
            RequestAdditionalTime(15000); // NOTE: 5000 added for overhead
        }
    }
}
Just how much "overhead" should I be adding in the RequestAdditionalTime call? I'm concerned that the requests are cumulative, instead of based on the point in time when each RequestAdditionalTime call is made. If that's the case, adding overhead could result in the system eventually denying the request because it's too far out in the future. But if I don't add any overhead then my service could be terminated before it has a chance to request the next block of additional time.
This post wasn't exactly encouraging:
The MSDN documentation doesn’t mention this but it appears that the value specified in RequestAdditionalTime is not actually ‘additional’ time. Instead, it replaces the value in ServicesPipeTimeout. Worse still, any value greater than two minutes (120000 milliseconds) is ignored, i.e. capped at two minutes.
I hope that's not the case, but I'm posting this as a worst-case answer.
UPDATE: The author of that post was kind enough to post a very detailed reply to my comment, which I've copied below.
Lars, the short answer is no.
What I would say is that I now realise that Windows Services ought to be designed to start and terminate processing quickly when requested to do so.
As developers, we tend to focus on the implementation of the processing and then package it up and deliver it as a Windows Service.
However, this really isn't the correct approach to designing Windows Services. Services must be able to respond quickly to requests to start and stop, not only when an administrator makes the request from the services console, but also when the operating system requests a start as part of its startup processing, or a stop because it is shutting down.
Consider what happens when Windows is configured to shut down when a UPS signals that the power has failed. It’s not appropriate for the service to respond with “I need a few more minutes…”.
It’s possible to write services that react quickly to stop requests even when they implement long running processing tasks. Usually a long running process will consist of batch processing of data and the processing should check if a stop has been requested at the level of the smallest unit of work that ensures data consistency.
As an example, the first service where I found the stop timeout was a problem involved the processing of a notifications queue on a remote server. The processing retrieved a notification from the queue, calling a web service to retrieve data related to the subject of the notification, and then writing a data file for processing by another application.
I implemented the processing as a timer driven call to a single method. Once the method is called it doesn’t return until all the notifications in the queue have been processed. I realised this was a mistake for a Windows Service because occasionally there might be tens of thousands of notifications in the queue and processing might take several minutes.
The method is capable of processing 50 notifications per second. So, what I should have done was implement a check to see if a stop had been requested before processing each notification. This would have allowed the method to return when it has completed the processing of a notification but before it has started to process the next notification. This would have ensured that the service responds quickly to a stop request and any pending notifications remained queued for processing when the service is restarted.
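As a sketch, that per-notification stop check might look like this (the queue and helper methods are illustrative, not from the original service):

void ProcessNotifications(CancellationToken stopRequested)
{
    Notification notification;
    while (!stopRequested.IsCancellationRequested &&
           (notification = queue.TryGetNext()) != null)
    {
        // The smallest unit of work that keeps the data consistent:
        var data = FetchRelatedData(notification); // the web service call
        WriteDataFile(data);                       // the hand-off file
        queue.Remove(notification); // only now is it safe to dequeue for good
    }
    // Returning here lets OnStop() finish quickly; any unprocessed
    // notifications remain queued for when the service is restarted.
}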

Asynchronous calls within a WCF Service

We have a situation where we need to execute some long-running code in the InitializeService method of a Data Service. Currently the first call to the data service fires off the code, but does not receive a response until the long-running code has finished. The client is not required to wait for this action to complete. I have attempted to use a new thread to execute the code; however, the code replaces some files on the server, which seems to kill the thread and cause it to bomb out. If I don't run it in a thread it works fine, but then the InitializeService method takes a long time to complete.
Are there any other ways to run this code asynchronously (was thinking maybe there is a way to call another method in the same fashion that a client would)?
Thanks in advance.
All WCF communication is basically asynchronous. Each call spins up its own thread on the host and the processing starts. The problem you're running into, like many of us, is that the client times out before the host is finished with the work, and there's no easy way around that beyond setting the timeout to some ridiculously large value.
It's better to split your processing into two or more parts, starting the initialization process and finishing it in separate steps, like this:
One option is a duplexed WCF service with a callback to the client. In other words, client "A" calls the host and starts the initialization routine, but the host immediately sends the client a value of InitializationStarted=True so that the client isn't left waiting for the timeout. Then, when the host has finished compiling the files, it calls the client (which has its own listener) and sends a message that the initialization is ready. Then the client calls the host and downloads the processed files.
This works well PC-to-server, or server-to-server.
Another option could work this way: client "A" contacts the host and the host starts the initialization routine, again sending back InitializationStarted=True. The host sets an internal (DB) value of FilesReady=False for client "A" until all the files are finished; at that point, the host sets it to FilesReady=True. Meanwhile, the client is on a timer, polling the host every minute until it finally receives FilesReady=True, then it downloads the waiting files.
If you're talking about an iPhone-to-server or Android-to-server, then this is a better route.
You follow?
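A minimal sketch of the second (polling) option; the member names mirror the flags described above, but the service shape is an illustrative assumption, not a real WCF contract:

using System.Threading;

public class InitializationService
{
    private volatile bool filesReady; // the FilesReady flag

    public bool StartInitialization()
    {
        // Kick off the long-running work and return immediately,
        // so the caller never waits long enough to hit its timeout.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            CompileFiles();   // the slow part (hypothetical helper)
            filesReady = true;
        });
        return true;          // "InitializationStarted = true"
    }

    public bool AreFilesReady()
    {
        return filesReady;    // the client polls this until it is true
    }
}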

How many simultaneous (concurrent) connections are actually active during many async requests

My understanding is the point of Task is to abstract out threads, and that a new thread is not guaranteed per Task.
I'm debugging in VS2010, and I have something similar to this:
var request = WebRequest.Create(URL);
Task.Factory.FromAsync<WebResponse>(
    request.BeginGetResponse,
    request.EndGetResponse,
    null /* state */).ContinueWith(
        t => { /* ... Stuff to do with response ... */ });
If I make X calls to this, e.g. start up X async web requests, how am I to calculate how many simultaneous (concurrent) connections are actually being made at any given time during execution? I assume that somehow it is opening only the max it can (in the case X is very high), and the other Tasks are blocked while waiting?
Any insight into this or how I can check with the debugger to determine how many active (open) connections are existent at a given point in execution would be great.
Basically, I'm wondering if it's handled for me, or if I have to take special consideration so that I do not appear to be attacking a server?
This won't really be specific to Task. The external connection is created as soon as you make your call to Task.Factory.FromAsync. The "task" that the Task is performing is simply waiting for the response to get back (not for it to be sent in the first place). Thus the call to BeginGetResponse will fail if your machine is unable to send any more requests, and the response will contain an error message if the server is rejecting your requests due to their belief that you are flooding them.
The only real place that Task comes into play here is the amount of time between when the response is actually received by the machine and when your continuation runs. If you are getting lots of responses, or otherwise have lots of work in the thread pool, it could take some time for it to get to your continuation.
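If you want to watch the number of requests in flight from the debugger, one low-tech option is to count them yourself; this is an illustrative helper, not a built-in counter. (Separately, note that ServicePointManager.DefaultConnectionLimit caps how many connections HttpWebRequest will open to a single host.)

using System.Net;
using System.Threading;
using System.Threading.Tasks;

static class RequestCounter
{
    public static int InFlight; // inspect this while debugging

    public static Task<WebResponse> GetResponseCounted(string url)
    {
        var request = WebRequest.Create(url);
        Interlocked.Increment(ref InFlight);
        return Task.Factory.FromAsync<WebResponse>(
                request.BeginGetResponse, request.EndGetResponse, null)
            .ContinueWith(t =>
            {
                Interlocked.Decrement(ref InFlight);
                return t.Result; // rethrows if the request faulted
            });
    }
}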

How to implement an IIS-like threadpool on a worker-server

EDIT I realised my question was not stated clearly enough and have edited it heavily.
This is a bit of an open ended question so apologies in advance.
In a nutshell, I want to implement IIS-style asynchronous request processing in an Azure worker role.
It may be very simple or it may be insanely hard - I am looking for pointers to where to research.
While my implementation will use Azure Workers and Service Bus Queues, the general principle is applicable to any scenario where a worker process is listening for incoming requests and then servicing them.
What IIS does
In IIS there is a fixed-size threadpool. If you deal with all requests synchronously, then the maximum number of requests you can handle in parallel == maxthreads. However, if you have to do slow external I/O to serve requests, this is highly inefficient, because you can end up with the server idle while all of its threads are tied up waiting for external I/O to complete.
From MSDN:
On the Web server, the .NET Framework maintains a pool of threads that are used to service ASP.NET requests. When a request arrives, a thread from the pool is dispatched to process that request. If the request is processed synchronously, the thread that processes the request is blocked while the request is being processed, and that thread cannot service another request.
This might not be a problem, because the thread pool can be made large enough to accommodate many blocked threads. However, the number of threads in the thread pool is limited. In large applications that process multiple simultaneous long-running requests, all available threads might be blocked. This condition is known as thread starvation. When this condition is reached, the Web server queues requests. If the request queue becomes full, the Web server rejects requests with an HTTP 503 status (Server Too Busy).
In order to overcome this issue, IIS has some clever logic that allows you to deal with requests asynchronously:
When an asynchronous action is invoked, the following steps occur:
The Web server gets a thread from the thread pool (the worker thread) and schedules it to handle an incoming request. This worker thread initiates an asynchronous operation.
The worker thread is returned to the thread pool to service another Web request.
When the asynchronous operation is complete, it notifies ASP.NET.
The Web server gets a worker thread from the thread pool (which might be a different thread from the thread that started the asynchronous operation) to process the remainder of the request, including rendering the response.
The important point here is that when the asynchronous operation completes, the continuation is scheduled to run on the same pool of threads that serves the initial incoming requests. This means the system limits how much work it is doing concurrently, and that is what I would like to replicate.
What I want to do
I want to create a Worker role which will listen for incoming work requests on Azure Service Bus Queues and also potentially on TCP sockets. Like IIS, I want a maximum threadpool size, and I want to limit how much actual work the worker does in parallel; if the worker is busy serving existing requests - whether new incoming ones or callbacks from previous async calls - I don't want to pick up any new incoming requests until some threads have been freed up.
It is not a problem to limit how many jobs I start concurrently - that is easy to control; the hard part is limiting how many I am actually working on concurrently.
Let's assume a threadpool of 100 threads.
Say 100 requests to send an email come in, and each email takes 5 seconds to send to the SMTP server. If I limit my server to processing only 100 requests at the same time, then my server will be unable to do anything else for 5 seconds while the CPU is completely idle. So I don't really mind starting to send 1,000 or 10,000 emails at the same time, because 99% of the "request processing time" will be spent waiting for external I/O and my server will still be very quiet.
So I could deal with that particular scenario by just accepting incoming requests with no limit (or by limiting only the start of the request until I fire off the async call; as soon as BeginSend is called, I return and start serving another request).
Now imagine instead a type of request that goes to the database to read some data, does some heavy calculation on it, and then writes the result back to the database. There are two database requests there that should be made asynchronous, but 90% of the request processing time will be spent on my worker. So if I follow the same logic as above - keep starting async calls and just let each continuation grab whatever thread it needs - I will end up with a server that is very overloaded.
Somehow, what IIS does is make sure that when an async call returns it uses the same fixed-size thread pool. This means that if I fire off a lot of async calls and they then return and start using my threads, IIS will not accept new requests until those returns have finished. And that is perfect because it ensures a sensible load on the server, especially when I have multiple load-balanced servers and a queue system that the servers pick work from.
I have this sneaky suspicion that this might be very simple to do, there is just something basic I am missing. Or maybe it is insanely hard.
Creating a threadpool should be considered as independent of Windows Azure. Since a Worker Role instance is effectively Windows 2008 Server R2 (or SP2), there's nothing really different. You'd just need to set things up from your OnStart() or Run().
One thing you wanted to do was use queue length as a determining factor when scaling to more/fewer worker instances. Note that Service Bus Queues don't advertise queue length, whereas Windows Azure Queues (based on Storage, vs. Service Bus) do. With Windows Azure Queues, you'll need to poll synchronously for messages (whereas Service Bus Queues have long-polling receive operations). It's probably a good idea to review the differences between Service Bus Queues and Windows Azure Queues.
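For the synchronous-polling case, the loop looks roughly like this (written against the classic Microsoft.WindowsAzure.StorageClient CloudQueue API; method names differ in newer SDKs, and Process/pollInterval/running are illustrative):

while (running)
{
    var message = queue.GetMessage(); // returns null when the queue is empty
    if (message == null)
    {
        Thread.Sleep(pollInterval);   // back off instead of hammering storage
        continue;
    }
    Process(message);
    queue.DeleteMessage(message);
}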
Have you considered having a dedicated WCF instance (not WAS- or IIS-hosted) to buffer the long-running requests? It will have its own dedicated app pool, with a separate max-connections setting from IIS that won't contend with your ASP.NET HTTP requests (which are served by the ASP.NET thread pool in the IIS worker process).
Then use asynchronous methods in IIS to call the WCF service with the constrained app pool.
I've used the SmartThreadPool project in the past as a per-instance pool and, if I'm reading you correctly, it should have all the callback and worker-limiting functionality you need. My company actually has it running currently on Azure for the exact purpose you describe of reading message bus requests asynchronously.
I have been digging around in this and found that it is indeed relatively easy.
http://www.albahari.com/threading/ has got some good information and I actually ended up buying the book which that website is essentially promoting.
What I found out is that:
Your application has a ThreadPool available to it by default.
You can limit the number of threads available in the ThreadPool.
When you use QueueUserWorkItem or Task.Factory.StartNew, the job you start runs on a thread in the ThreadPool.
When you use one of the asynchronous IO calls in the framework (Begin... methods, WebClient.DownloadStringAsync, etc.), the callbacks will also run on a thread from the ThreadPool (what happens with the IO request itself is outside the scope of this discussion).
So far, so good. The problem is that I can keep calling Task.Factory.StartNew as much as I like and the ThreadPool will simply queue up the work until there are free threads to service them. So, in the case of an Azure Worker, I could easily empty the Queue even though my worker is busy servicing existing requests (and callbacks from existing requests). That is the core of my problem. What I want is to not take anything out of the queue until I actually have some free threads to service the request.
This is a very simple example of how this could be achieved. In essence, I am using an AutoResetEvent to make sure that I don't start another task from the queue until the previous task has actually started. Granted, I do actually take stuff out of the queue before there is a free thread, but on balance this should avoid crazy overloads of the worker and allow me to spin up more workers to share the load.
ThreadPool.SetMaxThreads(5, 1000); // Limit to 5 concurrent threads
ThreadPool.SetMinThreads(5, 10);   // Ensure we spin up all threads

var jobStart = new AutoResetEvent(true);

// The "listen" loop
while (true)
{
    var job = this.jobQueue.Dequeue();
    jobStart.WaitOne(); // Wait until the previous job has actually been started
    Task.Factory.StartNew(
        () =>
        {
            jobStart.Set(); // Happens when the threadpool allocates this job to a thread
            this.Download(job);
        });
}
This can - and probably should - be made a lot more sophisticated, including having timeouts, putting the work item back in the queue if a thread can't be allocated within a reasonable time and so on.
An alternative would be to use ThreadPool.GetAvailableThreads to check whether there are free threads before starting to listen to the queue, but that feels rather more error-prone.
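For completeness, that alternative would look something like the following; the 50 ms back-off is an arbitrary choice, and the race between the check and StartNew is exactly what makes it feel error-prone:

int workerThreads, completionPortThreads;
while (true)
{
    ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
    if (workerThreads > 0)
    {
        var job = this.jobQueue.Dequeue();
        Task.Factory.StartNew(() => this.Download(job));
    }
    else
    {
        Thread.Sleep(50); // wait for a thread to free up before dequeuing
    }
}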
Somehow, what IIS does is make sure that when an async call returns
it uses the same fixed-size thread pool.
This is not true: when your code runs in response to an HTTP request, you decide on which threads the continuation functions execute. Usually this is the thread pool, and the thread pool is an AppDomain-wide resource shared among all requests.
I think IIS does less "magic" than you think it does. All it does is limit the number of parallel HTTP requests and the backlog size. You decide what happens once you have been given control by ASP.NET.
If your code is not protected against overloading the server, you will overload the server even on IIS.
From what I understand, you want to constrain the number of threads used for processing a certain type of message at the same time.
One approach would be to simply wrap the message processor, invoked on a new thread, with something like:
try
{
    Interlocked.Increment(ref count);
    Process(message);
}
finally
{
    Interlocked.Decrement(ref count);
}
Before invoking the wrapper, simply check whether count is less than your threshold; stop polling/handling more messages until the count has dropped sufficiently.
EDIT Added more information based on comment
Frans, I'm not sure why you see the infrastructure and business code as being coupled. Once you place your business process on a new thread to run asynchronously as a task, you need not worry about making the additional IO-bound calls asynchronous as well. This is a simpler model to program in.
Here is what I am thinking.
// semi - pseudo-code
// Infrastructure – reads messages from the queue
// (independent thread, could be a triggered by a timer)
while(count < maxCount && (message = Queue.GetMessage()) != null)
{
Interlocked.Increment(ref count);
// process message asynchronously on a new thread
Task.Factory.StartNew(() => ProcessWrapper(message));
}
// glue / semi-infrastructure - deals with message deletion and exceptions
void ProcessWrapper(Message message)
{
try
{
Process(message);
Queue.DeleteMessage(message);
}
catch(Exception ex)
{
// Handle exception here.
// Log, write to poison message queue etc ...
}
finally
{
Interlocked.Decrement(ref count)
}
}
// business process
void Process(Message message)
{
// actual work done here
;
}
