HttpListener setting a total connection timeout - c#

I am trying to control the maximum total duration of a single connection in HttpListener. I am aware of the TimeoutManager property and the five or so timeout values it contains, but it is unclear whether setting each of those values covers every point in a connection where delay may occur.
I am looking for something more along the lines of: "If a connection has lasted more than x seconds from the moment it was opened until now, abort it without sending anything else or waiting for anything else."
EDIT
To clarify, the scenario I was experimenting with involves the server trying to send the response while the client is not reading it. This causes HttpListenerResponse.OutputStream.Write() to hang indefinitely. I was trying to find a method I can call from another thread to hard-abort the connection. I tried OutputStream.Close() and got "Cannot Close Stream until all bytes are written". I also tried HttpListenerResponse.Abort(), which produced no visible effect.

None of those properties will do what you want. HttpListener is intended to control the request flow (incoming and outgoing data), so it doesn't track the total time between when the request has been fully received and when you send a response; that is your responsibility to handle.
You should create your own mechanism to abort the request if the total time is higher than the desired one. A timer can be enough: when a new connection is created, start a timer with the total timeout as its expiry; if the request ends before the timer expires, cancel the timer; otherwise, the timer aborts the request.
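A minimal sketch of that idea, using Task.Delay plus a CancellationTokenSource as the timer. The assumption here is that inside the callback you would call HttpListenerResponse.Abort(), which closes the underlying connection without flushing anything further:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Watchdog
{
    // Arms a one-shot timer; onTimeout fires on a thread-pool thread
    // unless Cancel() is called on the returned source first.
    public static CancellationTokenSource Start(TimeSpan timeout, Action onTimeout)
    {
        var cts = new CancellationTokenSource();
        Task.Delay(timeout, cts.Token).ContinueWith(t =>
        {
            if (!t.IsCanceled)
                onTimeout(); // e.g. () => context.Response.Abort()
        });
        return cts;
    }
}
```

When the request completes in time, call Cancel() on the returned source (for example in a finally block around your handler) to disarm the watchdog.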

Related

Call a method after a certain period of time without blocking

I'm making a webserver application, and I have a Listener class which waits for connections and spawns an HTTPConnection, passing it the new Socket created, each time a connect request is made. The HTTPConnection class waits for data asynchronously (using Socket.BeginReceive).
I need the delayed execution for a timeout. If the client fails to send a full HTTP request after a certain amount of time, I want to close the connection. As soon as the HTTPConnection object is constructed, the waiting period should begin, and a Timeout function should be called if the client fails to send the request in time. Obviously, I can't pause the constructor for a few seconds, so the waiting needs to happen asynchronously. I also need to be able to cancel the task.
I could do new Thread(...) and all, but that's very poor design. Are there any other ways to schedule a method to be called later?
You could append all postponed events to an ordered data structure and have a background task check at a certain interval whether there is a timeout event that has to be executed.
You could also save these events in a database (if you have a lot of clients, I imagine keeping them all in memory could lead to high memory usage).
Your background task could then fetch all the expired events from the database and handle them at once.
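For the simple per-connection case, a one-shot System.Threading.Timer avoids spawning a thread per client. A sketch, where the onTimeout callback stands in for whatever your HTTPConnection does to close the socket:

```csharp
using System;
using System.Threading;

class RequestTimeout : IDisposable
{
    private readonly Timer _timer;

    // Starts a one-shot timer; onTimeout runs on a thread-pool thread.
    public RequestTimeout(TimeSpan timeout, Action onTimeout)
    {
        _timer = new Timer(_ => onTimeout(), null, timeout, Timeout.InfiniteTimeSpan);
    }

    // Call this when the full request arrives in time.
    public void Cancel() => _timer.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);

    public void Dispose() => _timer.Dispose();
}
```

Construct one of these in the HTTPConnection constructor and call Cancel() once the request has been fully received.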

Fail over subscribe with SipSorcery

How is it possible to create a new subscription using SipSorcery, if your TCP connection died within the expiry timer of your initial subscription?
Right now I have an expiry timer running, and when it elapses, I check whether the connection is established like this:
while (!tcpChannel.IsConnectionEstablished(myRemoteEndpoint))
{
    //... using same from tag, but creating new call id saved as SIPRequest _request...
    System.Threading.Thread.Sleep(1000 * 60);
    tcpChannel.Send(myRemoteEndpoint, Encoding.UTF8.GetBytes(_request.ToString()));
}
The idea was to wait 60 seconds, then try to send a new SUBSCRIBE to the server, check whether the connection is established, and if not, run again after 60 seconds until it is.
But IsConnectionEstablished seems a little unreliable for this purpose... It's as if the while loop is blocking on something. I can see that my SUBSCRIBE request has been sent, but I'm not receiving any response to it.
Any ideas are appreciated.
You shouldn't need to do the IsConnectionEstablished check. When you call tcpChannel.Send it will take care of establishing a new TCP connection to the required end point if one is not available.
As to why you are not receiving a response to your subsequent SUBSCRIBE requests: if you are re-sending the same request repeatedly without updating the required headers, such as CSeq, Via branch ID, Call-ID and From tag, then it's probably getting flagged as a duplicate request.
Also, you may want to have a look at SIPSorcery.SIP.App.SIPNotifierClient, as it is designed to maintain a subscription with a SIP server.

Handling Timeouts in a Socket Server

I have an asynchronous socket server that contains a thread-safe collection of all connected clients. If there's no activity coming from a client for a set amount of time (i.e. timeout), the server application should disconnect the client. Can someone suggest the best way to efficiently track this timeout for each connected client and disconnect when the client times out? This socket server must be very high performance, and at any given time hundreds of clients could be connected.
One solution is to associate each client with a last-activity timestamp and have a timer periodically poll the collection to see which connections have timed out based on that timestamp, then disconnect them. However, this solution isn't ideal because the timer thread has to lock the collection while it polls (preventing any other connections/disconnections) for the entire time it takes to check every connected client and disconnect the timed-out ones.
Any suggestions/ideas would be greatly appreciated. Thanks.
If this is a new project or if you're open to a major refactor of your project, have a look at Reactive Extensions. Rx has an elegant solution for timeouts in asynchronous calls:
var getBytes = Observable.FromAsyncPattern<byte[], int, int, int>(_readStream.BeginRead, _readStream.EndRead);
getBytes(buffer, 0, buffer.Length)
.Timeout(timeout);
Note, the code above is just intended to demonstrate how to timeout in Rx and it would be more complex in a real project.
Performance-wise, I couldn't say unless you profile your specific use case. But I have seen a talk where they used Rx in a complex data-driven platform (probably like your requirements) and they said that their software was able to make decisions within less than 30ms.
Code-wise, I find that using Rx makes my code look more elegant and less verbose.
Your solution is quite OK for many cases, when the number of connected clients is small compared to the cost of enumerating the collection of their contexts. If you allow some wiggle room in the timeout (i.e. it's OK to disconnect the client somewhere between 55 and 65 seconds of inactivity), I'd just run the purging routine every so often.
The other approach which worked great for me was using a queue of activity tokens. Let's use C#:
class Token
{
    ClientContext theClient; // the client we've observed activity on
    DateTime theTime;        // the time of the observed activity
}
class ClientContext
{
    // ... what you need to know about the client
    DateTime lastActivity;   // the time the last activity happened on this client
}
Every time an activity happens on a particular client, a token is generated and pushed into FIFO queue and lastActivity is updated in the ClientContext.
There is another thread which runs the following loop:
extracts the oldest token from the FIFO queue;
checks if theTime in this token matches theClient.lastActivity;
if it does, shuts down the client;
looks at the next oldest token in the queue and calculates how much time is left until it needs to be shut down;
Thread.Sleep(<this time>);
repeat
The price of this approach is small but constant, i.e. O(1) overhead per activity event. One can come up with solutions that have a faster best case, but it seems hard to find one with better worst-case performance.
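The loop above might be sketched in C# like this, using a BlockingCollection as the FIFO; ClientContext.Shutdown here just sets a flag where real code would close the socket:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class ClientContext
{
    public DateTime LastActivity;
    public bool IsShutdown;
    public void Shutdown() => IsShutdown = true; // close the real socket here
}

class TimeoutMonitor
{
    private readonly BlockingCollection<(ClientContext Client, DateTime Time)> _tokens
        = new BlockingCollection<(ClientContext, DateTime)>();
    private readonly TimeSpan _timeout;

    public TimeoutMonitor(TimeSpan timeout) => _timeout = timeout;

    // Call on every observed activity for a client.
    public void RecordActivity(ClientContext client)
    {
        client.LastActivity = DateTime.UtcNow;
        _tokens.Add((client, client.LastActivity));
    }

    // Run on one dedicated background thread.
    public void Run(CancellationToken ct)
    {
        foreach (var token in _tokens.GetConsumingEnumerable(ct))
        {
            // Sleep until this token's deadline.
            var wait = token.Time + _timeout - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);

            // Shut down only if no newer activity superseded this token.
            if (token.Client.LastActivity == token.Time)
                token.Client.Shutdown();
        }
    }
}
```

A client that stays active keeps pushing fresh tokens, so its older tokens fail the lastActivity check and are simply discarded.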
I think it is easier to control the timeout from within the connection thread. So you may have something like this:
// accept socket and open stream
stream.ReadTimeout = 10000; // 10 seconds
while (true)
{
    int bytes = stream.Read(buffer, 0, length);
    // process the data
}
The stream.Read() call will either return data (which means the client is alive), throw an IOException (timeout or abnormal disconnect), or return 0 (the client closed the socket).

NServiceBus Delayed Message Processing

I have an NServiceBus application for which a given message may not be processed due to some external event not having taken place. Because this other event is not an NSB event I can't implement sagas properly.
However, rather than just re-queuing the message (which would cause a loop until that external event has occurred), I'm wrapping the message in another message (DelayMessage) and queuing that instead. The DelayMessage is picked up by a different service and placed in a database until the retry interval expires. At which point, the delay service re-queues the message on the original queue so another attempt can be made.
However, this can happen more than once if that external event still hasn't taken place, and in the case where that event never happens, I want to limit the number of round trips the message takes. That means the DelayMessage has a MaxRetries property, but that value is lost when the delay service queues the original message for the retry.
What other options am I missing? I'm happy to accept that there's a totally different solution to this problem.
Consider implementing a saga which stores that first message, holding on to it until the second message arrives. You might also want the saga to open a timeout as well so that your process won't wait indefinitely if that second message got lost or something.

WebService and Polling

I'd like to implement a WebService containing a method whose reply will be delayed anywhere from less than 1 second to about an hour (depending on whether the data is already cached or needs to be fetched).
Basically my question is what would be the best way to implement this if you are only able to connect from the client to the WebService (no notification possible)?
AFAIK this will only be possible by using some kind of polling, but polling is bad, so I'd rather avoid it. The other extreme would be to just leave the connection open until the method is done, but I guess this could end up slowing down the web server and the network. I considered combining these two techniques: the client would call the method and the server would return after at least 10 seconds with either the actual result or a message telling the client to poll again.
What are your thoughts?
You probably want to have a look at Comet.
I would suggest a sort of intelligent polling, if possible:
On the first request, return a token to represent the request. This token is what gets presented in future requests, so it's easy to check whether or not that request has really completed.
On future requests, hold the connection open for a certain amount of time (e.g. a minute, possibly specified by the client) and return either the result or a response of "still no results; please try again at X", where X is your best guess about when the response will be completed.
Advantages:
You allow the client to use the "hold a connection open" model, which is relatively expensive (in terms of connections) but allows the response to be served as soon as it's ready. Make sure you don't hold onto a thread per connection though! (And have some kind of time limit...)
By telling the client when to come back, you can implement a backoff policy: even if you don't know when the result will be ready, you could back off for 1, 2, 4, 8, 16, 30, 30, 30, 30... minutes. (You should potentially check that the client isn't ignoring this.) You don't end up with masses of wasted polls when the result takes a long time, but you still get quick results quickly.
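The client side of that backoff policy might be sketched as follows; tryGetResult is a hypothetical call that presents the token to the server and reports whether the result is ready, and the intervals are parameters so the policy can be tuned:

```csharp
using System;
using System.Threading;

static class Poller
{
    // Polls until tryGetResult reports completion, doubling the wait
    // between attempts up to maxDelay (e.g. 1, 2, 4, ... capped at 30 minutes).
    public static string PollWithBackoff(
        Func<(bool Done, string Result)> tryGetResult,
        TimeSpan initialDelay, TimeSpan maxDelay)
    {
        var delay = initialDelay;
        while (true)
        {
            var (done, result) = tryGetResult();
            if (done)
                return result;
            Thread.Sleep(delay);
            var doubled = delay + delay;
            delay = doubled > maxDelay ? maxDelay : doubled;
        }
    }
}
```

If the server returns a "try again at X" hint, you would sleep until X instead of using the doubled delay.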
I think that for something which could take an hour to respond a web service is not the best mechanism to use.
Why is polling bad? Surely if you adjust the frequency of the polling it won't be so bad. Perhaps double the time between polls, with a maximum of about five minutes.
Some web services I've worked with return a "please try again in ..." XML message when they can't respond immediately. I realise this is just a refinement of the polling technique, but if your server can determine, at the time of the request, what the likely delay is going to be, it could tell the client that and then forget about the request, leaving the client to ask again once the polling interval has expired.
There are timeouts in IIS and on the client side which will prevent you from leaving the connection open. It's also impractical because resources and connections stay blocked on the server.
Why make the user wait for such a long-running task? Let them look up the status of the operation somewhere instead.
