How is it possible to create a new subscription using SipSorcery if your TCP connection died before the expiry timer of your initial subscription elapsed?
Right now I have an expiry timer running, and when it elapses, I check whether the connection is established like this:
while (!tcpChannel.IsConnectionEstablished(myRemoteEndpoint))
{
    // ... reusing the same from tag, but creating a new Call-ID, saved as SIPRequest _request ...
    System.Threading.Thread.Sleep(1000 * 60); // wait 60 seconds between attempts
    tcpChannel.Send(myRemoteEndpoint, Encoding.UTF8.GetBytes(_request.ToString()));
}
The idea was to wait 60 seconds, then try to send a new SUBSCRIBE to the server and check whether the connection is established; if not, run again after another 60 seconds, until the connection is established.
But .IsConnectionEstablished seems a little unreliable for this purpose... It's like the while loop is blocking on something. I can see that my SUBSCRIBE request has been sent, but I'm not receiving any response to it.
Any ideas are appreciated.
You shouldn't need to do the IsConnectionEstablished check. When you call tcpChannel.Send it will take care of establishing a new TCP connection to the required end point if one is not available.
As to why you are not receiving a response to your subsequent SUBSCRIBE requests: if you are re-sending the same request repeatedly without updating the required headers, such as CSeq, Via branch ID, Call-ID and From tag, then it is probably being flagged as a duplicate request.
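For illustration, a minimal sketch of refreshing those headers before each re-send. It assumes SIPSorcery's CallProperties helpers and the usual SIPHeader property names, which are worth double-checking against your library version:

// Give the retry a fresh transaction/dialog identity so it is not
// treated as a retransmission of the original SUBSCRIBE.
_request.Header.CallId = CallProperties.CreateNewCallId();
_request.Header.From.FromTag = CallProperties.CreateNewTag();
_request.Header.CSeq++;
_request.Header.Vias.TopViaHeader.Branch = CallProperties.CreateBranchId();
tcpChannel.Send(myRemoteEndpoint, Encoding.UTF8.GetBytes(_request.ToString()));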
Also, you may want to have a look at SIPSorcery.SIP.App.SIPNotifierClient, as it is designed to maintain a subscription with a SIP server.
I am using wsHttpBinding with InstanceContextMode.PerCall. On the server side I need to detect whether the client has properly received the response. If not, I need to clear some stuff on the server side. For this I tried to use OperationContext.Current.InstanceContext.Closed and OperationContext.Current.InstanceContext.Faulted. The Closed event always fires, but the Faulted event never fires no matter what I do (time out the connection, close the client while it is still receiving data, unplug the cable). Also, OperationContext.Current.InstanceContext.State is always Closed.
How can I detect that the client did not receive the response?
As an alternative solution I considered catching the error on the client side and calling a cleanup method on the server side, but this may complicate things because the internet connection may be down for a while.
Channel events on the service side are fired when the client calls Close() successfully; you will not get channel events fired on the server side because of client faults or timeouts until the moment the server replies, which is very stupid.
To solve your problem you have three solutions:
1- If the client is yours, use duplex to call a dummy method on the client; then the channel will fire its events.
2- Use a reliable session (a sessionful binding with reliable sessions turned on) so you can set the InactivityTimeout, which will fire when infrastructure messages like keep-alive messages stop arriving. I guess the InactivityTimeout default is 10 minutes, while the Windows keep-alive frequency is 1 second, so you could set InactivityTimeout to 5 seconds, for example.
3- Use IIS hosting with one of the HTTP bindings, so that inside your operation you can create a loop that keeps checking ClientDisconnectedToken.IsCancellationRequested on HttpContext.Current.Response (in the System.Web namespace); a sketch follows this list.
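A rough sketch of option 3. It assumes IIS hosting with ASP.NET compatibility enabled (so HttpContext.Current is available inside the operation); everything except ClientDisconnectedToken is a hypothetical name:

public string LongRunningOperation()
{
    var response = System.Web.HttpContext.Current.Response;
    while (!WorkIsDone()) // hypothetical completion check
    {
        if (response.ClientDisconnectedToken.IsCancellationRequested)
        {
            CleanUpServerState(); // hypothetical cleanup of the per-call state
            return null;
        }
        System.Threading.Thread.Sleep(500); // re-check the token periodically
    }
    return GetResult(); // hypothetical result fetch
}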
I am trying to control the maximum total duration of a single connection in HttpListener. I am aware of the TimeoutManager property and the five or so different timeout values it contains, but it is unclear whether setting those values covers, between them, every place where delay may occur in a connection.
I am looking for something more along the lines of: "If we have a connection that lasts more than x s from the moment of opening the connection until now, abort it without sending anything else or waiting for anything else."
EDIT
To clarify, the scenario I was experimenting with involves the server trying to send the response and the client not receiving it. This causes HttpListenerResponse.OutputStream.Write() to hang indefinitely. I was trying to find a method that I can call from another thread to hard-abort the connection. I tried using OutputStream.Close() and got "Cannot Close Stream until all bytes are written". I also tried HttpListenerResponse.Abort(), which produced no visible effect.
None of those properties will do what you want. HttpListener is intended to control the request flow, i.e. the incoming and outgoing data, so it doesn't manage the time between when the request has been fully received and when you send a response; that is your responsibility to take care of.
You should create your own mechanism to abort the request if the total time is higher than the desired one. A timer can be enough: when a new connection is created, enqueue a timer with the total timeout as its expiry time; if the request ends before the timer expires, cancel the timer, otherwise let the timer abort the request.
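A minimal sketch of that mechanism, assuming a hypothetical HandleRequest method; HttpListenerResponse.Abort() is documented to forcibly close the underlying connection without completing the response:

void HandleWithBudget(HttpListenerContext ctx, TimeSpan budget)
{
    // Hard-abort the connection when the total time budget expires.
    using (var timer = new System.Threading.Timer(
        _ => ctx.Response.Abort(), null, budget, System.Threading.Timeout.InfiniteTimeSpan))
    {
        HandleRequest(ctx); // hypothetical handler doing the actual work
    } // disposing the timer cancels it if the request finished in time
}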
In short
How do I prevent a duplex callback channel from being closed after an idle period?
In detail
I have a mostly working duplex WCF setup over NetTcpBinding, i.e. the client can talk to the server and the server can call back to the client.
Furthermore, I have a reliable session such that the client does not lose the connection to the server after the default period of inactivity, achieved with the following configuration on both client and server:
var binding = new NetTcpBinding(SecurityMode.None);
// Need to prevent channel being closed after inactivity
// i.e. need to prevent the exception: This channel can no longer be used to send messages as the output session was auto-closed due to a server-initiated shutdown. Either disable auto-close by setting the DispatchRuntime.AutomaticInputSessionShutdown to false, or consider modifying the shutdown protocol with the remote server.
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.ReliableSession.Enabled = true;
binding.ReliableSession.InactivityTimeout = TimeSpan.MaxValue;
However, after a period of inactivity of less than half an hour (haven't measured the minimum time exactly), the server is unable to use the callback again - the server just blocks for a minute or so and I do not see any exceptions, while nothing happens on the client side (no evidence of callback).
Leads and root causes?
Note that I can use the callback fine twice in a row, as long as I do not wait long in between the calls.
Are the callbacks configured somewhere else? Do callbacks have their own timeouts etc?
Might it be a blocking/threading issue? Do I need to either set UseSynchronizationContext=false on the client (see the sketch after this list), or avoid blocking while waiting for the message to be received?
Should DispatchRuntime.AutomaticInputSessionShutdown be set to false, and if so, how? I'm not really sure how it relates to reliable sessions and I do not know where to access this property
Anything else?
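On the UseSynchronizationContext question above, a minimal sketch of how it is usually applied to the duplex callback implementation (the interface name is hypothetical):

// Prevents callbacks from being marshalled onto a captured (e.g. UI)
// SynchronizationContext, where a blocked thread would stall them.
[CallbackBehavior(UseSynchronizationContext = false)]
public class MyServiceCallbackHandler : IMyServiceCallback
{
    // ... callback operations ...
}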
I achieved this by extending the ClientBase class with an automatic keep-alive message to be invoked on the target interface when no other calls are made.
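A minimal sketch of that idea, with a hypothetical contract IMyService carrying a no-op Ping operation; the interval must sit well inside the configured inactivity window:

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void Ping(); // no-op keep-alive operation
    // ... real operations ...
}

public class KeepAliveClient : ClientBase<IMyService>, IMyService
{
    private readonly System.Threading.Timer _keepAlive;

    public KeepAliveClient()
    {
        // Ping every 5 minutes so the reliable session never appears idle.
        _keepAlive = new System.Threading.Timer(
            _ => Ping(), null, TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    public void Ping() => Channel.Ping();
}

In a real client you would also reset the timer whenever a genuine call goes out, so keep-alives are only sent while the channel is otherwise idle.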
I'm trying to make a stunnel clone in C# just for fun. The main loop goes something like this (ignore the catch-everything-and-do-nothing try-catches for now):
ServicePointManager.ServerCertificateValidationCallback = Validator;
TcpListener a = new TcpListener(IPAddress.Any, 9999);
a.Start();
while (true) {
    Console.Error.WriteLine("Spinning...");
    try {
        // Pre-open the backend TLS connection before a user arrives.
        TcpClient remote = new TcpClient("XXX.XX.XXX.XXX", 2376);
        SslStream ssl = new SslStream(remote.GetStream(), false, new RemoteCertificateValidationCallback(Validator));
        ssl.AuthenticateAsClient("mirai.ca");
        TcpClient user = a.AcceptTcpClient();
        new Thread(new ThreadStart(() => {
            Thread.CurrentThread.IsBackground = true;
            try {
                forward(user.GetStream(), ssl); // forward is a blocking function I wrote
            } catch {}
        })).Start();
    } catch {
        Thread.Sleep(1000);
    }
}
I found that if I open the remote SSL connection before waiting for the user, as above, then by the time the user connects the SSL session is already set up (this is for tunneling HTTP, so latency is pretty important). On the other hand, my server closes long-inactive connections, so if no new connection arrives in, say, 5 minutes, everything locks up.
What is the best way?
Also, I observe my program generating as many as 200 threads, which of course means the context-switching overhead is pretty big and sometimes results in the whole thing just blocking for seconds, even with just one user tunneling through the program. My forward function goes, in essence, like this:
new Thread(new ThreadStart(() => input.CopyTo(output))).Start(); // one direction on its own thread
output.CopyTo(input); // the other direction on the calling thread
of course with lots of error handling to prevent broken connections from holding things up forever. This seems to stall a lot, though. I can't figure out how to use asynchronous methods like BeginRead, which should help according to Google.
For any kind of proxy server (including a stunnel clone), opening the backend connection after you accept the frontend connection is clearly much simpler to implement.
If you pre-open backend connections in anticipation of receiving frontend connections, you can certainly save an RTT (which is good for latency), but you have to deal with the issue you hinted at: the backend will close idle connections. At any time that you receive a frontend connection, you run the risk that the backend connection you are about to associate with it, which was opened some time ago, is too old to use and may be closed by the backend. You will have to manage a pool of currently open backend connections and periodically close and refresh them when they have been idle for too long. There is even a race condition: if the backend decides a connection has been idle too long and closes it just as the proxy receives a new frontend connection, the frontend may try to forward a request through the backend connection while the backend is closing it. That means you must know a priori how long backend connections can stay idle before the backend closes them (i.e. you must know the timeout values configured on the backend) so you can give them up just before the backend decides they are too old.
So in summary: pre-opening backend connections will save an RTT versus opening them only on demand, but it is a lot of work, including subtle connection pool management that is quite tough to implement bug-free. It is up to you to judge whether the extra complexity is worth it; a sketch of such a pool follows.
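A hypothetical sketch of that pool management, reusing the elided backend address from the question; the idle budget must be comfortably shorter than the backend's real timeout:

using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

static class BackendPool
{
    // Assumption: the backend closes idle connections after ~5 minutes.
    static readonly TimeSpan MaxIdle = TimeSpan.FromMinutes(4);
    static readonly ConcurrentQueue<(TcpClient Client, DateTime OpenedAt)> Pool =
        new ConcurrentQueue<(TcpClient, DateTime)>();

    public static void PreOpen() =>
        Pool.Enqueue((new TcpClient("XXX.XX.XXX.XXX", 2376), DateTime.UtcNow));

    public static TcpClient Get()
    {
        while (Pool.TryDequeue(out var entry))
        {
            if (DateTime.UtcNow - entry.OpenedAt < MaxIdle)
                return entry.Client; // still fresh enough to hand out
            entry.Client.Dispose();  // too old; the backend may be closing it
        }
        // Pool exhausted: fall back to opening on demand (costs one RTT).
        return new TcpClient("XXX.XX.XXX.XXX", 2376);
    }
}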
By the way, concerning your comment about handling several hundred simultaneous connections: I recommend implementing an I/O-bound program such as a proxy server around an event loop rather than around threads. Basically, you use non-blocking sockets and process events in a single thread (e.g. "this socket has new data waiting to be forwarded to the other side") instead of spawning a thread per connection (which gets expensive both in thread creation and in context switches). To scale such an event-based model to multiple CPU cores, you can start a small number of parallel threads or processes (roughly one per CPU core), each handling many hundreds (or thousands) of simultaneous connections.
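In C#, the modern equivalent of BeginRead for this is async I/O over the two streams, which pumps both directions without dedicating a thread to each; a minimal sketch:

using System.IO;
using System.Threading.Tasks;

static async Task ForwardAsync(Stream client, Stream backend)
{
    // Pump both directions concurrently; CopyToAsync uses async reads/writes.
    Task clientToBackend = client.CopyToAsync(backend);
    Task backendToClient = backend.CopyToAsync(client);
    await Task.WhenAny(clientToBackend, backendToClient); // one side closed or failed
    client.Dispose();  // tear down both halves of the tunnel
    backend.Dispose();
}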
I'd like to implement a WebService containing a method whose reply will be delayed anywhere from under 1 second to about an hour (it depends on whether the data is already cached or needs to be fetched).
Basically my question is what would be the best way to implement this if you are only able to connect from the client to the WebService (no notification possible)?
AFAIK this will only be possible with some kind of polling. But polling is bad, so I'd rather avoid it. The other extreme would be to just leave the connection open until the method is done, but I guess this could end up slowing down the web server and the network. I considered combining these two techniques: the client would call the method, and the server would return after at least 10 seconds either with a message that the client needs to poll again or with the actual result.
What are your thoughts?
You probably want to have a look at Comet.
I would suggest a sort of intelligent polling, if possible:
On first request, return a token to represent the request. This is what gets presented in future requests, so it's easy to check whether or not that request has really completed.
On future requests, hold the connection open for a certain amount of time (e.g. a minute, possibly specified by the client) and return either the result or a result of "still no results; please try again at X", where X is your best guess about when the response will be completed (a sketch of this contract follows the advantages below).
Advantages:
You allow the client to use the "hold a connection open" model, which is relatively expensive (in terms of connections) but allows the response to be served as soon as it's ready. Make sure you don't hold onto a thread per connection, though! (And have some kind of time limit...)
By saying when the client should come back, you can implement a backoff policy - even if you don't know when it will be ready, you could have a "backoff for 1, 2, 4, 8, 16, 30, 30, 30, 30..." minutes policy. (You should potentially check that the client isn't ignoring this.) You don't end up with masses of wasted polls for long misses, but you still get quick results quickly.
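A hypothetical sketch of the token-based contract this suggests (all names are illustrative):

public class PollResult
{
    public bool Completed;      // true once Result is valid
    public string Result;       // the payload, when ready
    public DateTime RetryAfter; // best-guess next poll time when not yet ready
}

public interface ILongRunningService
{
    // First request: start the work and hand back a token.
    Guid StartRequest(string query);

    // Later requests: hold the connection open for up to maxWait, then
    // either return the result or say when to come back.
    PollResult Poll(Guid token, TimeSpan maxWait);
}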
I think that for something which could take an hour to respond, a web service is not the best mechanism to use.
Why is polling bad? Surely if you adjust the frequency of the polling it won't be so bad. Perhaps double the time between polls, up to a maximum of about five minutes.
Some web services I've worked with return a "please try again in ..." XML message when they can't respond immediately. I realise that this is just a refinement of the polling technique, but if your server can determine at the time of the request what the likely delay is going to be, it could tell the client that and then forget about the request, leaving the client to ask again once the polling interval has expired.
There are timeouts on IIS and on the client side which will prevent you from leaving the connection open.
This is also not practical, because resources/connections are blocked on the server.
Why do you want the user to wait for such a long running task? Let them look up the status of the operation somewhere.