In short
How can I prevent a duplex callback channel from being closed after an idle period?
In detail
I have a mostly working duplex WCF setup over NetTcpBinding, i.e. the client can talk to the server and the server can call back to the client.
Furthermore, I have a reliable session such that the client does not lose the connection to the server after the default period of inactivity, achieved with the following configuration on both client and server:
var binding = new NetTcpBinding(SecurityMode.None);
// Need to prevent the channel from being closed after inactivity, i.e. prevent the exception:
// "This channel can no longer be used to send messages as the output session was auto-closed
//  due to a server-initiated shutdown. Either disable auto-close by setting the
//  DispatchRuntime.AutomaticInputSessionShutdown to false, or consider modifying the shutdown
//  protocol with the remote server."
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.ReliableSession.Enabled = true;
binding.ReliableSession.InactivityTimeout = TimeSpan.MaxValue;
However, after a period of inactivity of less than half an hour (I haven't measured the minimum time exactly), the server is unable to use the callback again: the server just blocks for a minute or so without throwing any exceptions, while nothing happens on the client side (no evidence of the callback arriving).
Leads and root causes?
Note that I can use the callback fine several times in a row, as long as I do not wait long between the callback calls.
Are the callbacks configured somewhere else? Do callbacks have their own timeouts etc?
Might it be a blocking/threading issue - do I need to set UseSynchronizationContext=false on the client, or avoid blocking while waiting for the message to be received?
Should DispatchRuntime.AutomaticInputSessionShutdown be set to false, and if so, how? I'm not really sure how it relates to reliable sessions, and I do not know where to access this property.
Anything else?
I achieved this by extending the BaseClient class with an automatic keep-alive message that is invoked on the target interface when no other calls are made.
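A minimal sketch of that approach, assuming a proxy derived from DuplexClientBase&lt;T&gt; and a hypothetical no-op Ping() operation on the service contract (none of these names come from the original setup): a timer is restarted after every real call and, when it fires, invokes Ping() so the session never sits idle.

// Sketch only; uses System.ServiceModel, System.ServiceModel.Channels and System.Threading.
public class KeepAliveClient : DuplexClientBase<IMyService>, IMyService
{
    // Interval is illustrative; pick something comfortably below the server's idle timeout.
    private static readonly TimeSpan KeepAliveInterval = TimeSpan.FromMinutes(5);
    private readonly Timer _keepAliveTimer;

    public KeepAliveClient(InstanceContext callbackContext, Binding binding, EndpointAddress address)
        : base(callbackContext, binding, address)
    {
        _keepAliveTimer = new Timer(_ => SafePing(), null, KeepAliveInterval, KeepAliveInterval);
    }

    public void DoWork()
    {
        Channel.DoWork();
        ResetKeepAlive(); // any real call counts as activity
    }

    public void Ping()
    {
        Channel.Ping(); // hypothetical no-op operation on the service contract
        ResetKeepAlive();
    }

    private void SafePing()
    {
        try { Ping(); }
        catch { /* channel may already be faulted; let normal fault handling deal with it */ }
    }

    private void ResetKeepAlive()
    {
        _keepAliveTimer.Change(KeepAliveInterval, KeepAliveInterval);
    }
}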
Related
I am using wsHttpBinding with InstanceContextMode.PerCall. On the server side I need to detect whether the client has properly received the response; if not, I need to clear some state on the server side. For this I tried to use OperationContext.Current.InstanceContext.Closed and OperationContext.Current.InstanceContext.Faulted. The Closed event always fires, but the Faulted event never fires no matter what I do (time out the connection, close the client while it is still receiving data, unplug the cable). Also, OperationContext.Current.InstanceContext.State is always Closed.
How can I detect that the client did not receive the response?
As an alternative solution I was thinking of catching the error on the client side and calling a cleanup method on the server side, but this may complicate things because the internet connection may be down for a while.
Channel events on the service side fire when the client calls Close() successfully; you will not get channel events on the server side for client faults or timeouts until the moment the server replies, which is very annoying.
To solve your problem, you have three options:
1- If the client is yours, use a duplex contract to call a dummy method on the client; the channel will then fire its events.
2- Use a reliable session (a session-capable binding with reliable sessions turned on) so you can set the InactivityTimeout, which fires when infrastructure messages such as keep-alives stop arriving. I believe the InactivityTimeout default is 10 minutes, while the Windows keep-alive frequency is 1 second, so you could set InactivityTimeout to 5 seconds, for example (see the sketch after this list).
3- Use IIS hosting with one of the HTTP bindings, and inside your operation create a loop that keeps checking HttpContext.Current.Response.ClientDisconnectedToken.IsCancellationRequested (from the System.Web namespace).
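For option 2, here is a rough illustration (the binding choice and values are illustrative, not prescriptive) of what enabling a reliable session with a short InactivityTimeout could look like in code:

// Illustrative only: a session-capable binding with reliable sessions turned on and a short
// InactivityTimeout, so the channel faults soon after the infrastructure keep-alive messages stop arriving.
var binding = new WSHttpBinding(SecurityMode.Message, reliableSessionEnabled: true);
binding.ReliableSession.InactivityTimeout = TimeSpan.FromSeconds(5);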
I have an issue with WCF timing out. The strange thing is that my method is actually being called on the server, but the client call to the object returned from CreateChannel() is timing out with an exception.
The entire error message:
This request operation sent to net.pipe://localhost/AndonServer did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
I could just decrease the timeout setting to 5 seconds, say, but that's a bit dirty. Anyone have any ideas why this might be happening?
Mark
It means the timeout period elapsed while waiting for a reply from the server. By default, all calls in WCF have both a request and a reply, even void methods, so the server needs to complete the call promptly for WCF to send the reply. Another option is to use a one-way call if the client does not require a reply from the server.
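For illustration, a one-way operation is declared on the contract like this (the contract and operation names here are made up):

[ServiceContract]
public interface IAndonService
{
    // One-way: WCF does not generate a reply message, so the client returns
    // as soon as the request has been accepted by the transport.
    [OperationContract(IsOneWay = true)]
    void RaiseAlert(string message);
}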
I'm trying to make a stunnel clone in C# just for fun. The main loop goes something like this (ignore the catch-everything-and-do-nothing try-catches for now):
ServicePointManager.ServerCertificateValidationCallback = Validator;
TcpListener a = new TcpListener(9999);
a.Start();
while (true) {
    Console.Error.WriteLine("Spinning...");
    try {
        // open the backend SSL connection before a user connects
        TcpClient remote = new TcpClient("XXX.XX.XXX.XXX", 2376);
        SslStream ssl = new SslStream(remote.GetStream(), false, new RemoteCertificateValidationCallback(Validator));
        ssl.AuthenticateAsClient("mirai.ca");

        // wait for a local user, then pump bytes between the two connections on a background thread
        TcpClient user = a.AcceptTcpClient();
        new Thread(new ThreadStart(() => {
            Thread.CurrentThread.IsBackground = true;
            try {
                forward(user.GetStream(), ssl); // forward is a blocking function I wrote
            } catch { }
        })).Start();
    } catch {
        Thread.Sleep(1000);
    }
}
I found that if I open the remote SSL connection before waiting for the user, as above, then the SSL is already set up by the time the user connects (this is for tunneling HTTP, so latency is pretty important). On the other hand, my server closes long-inactive connections, so if no new connection happens in, say, 5 minutes, everything locks up.
What is the best way?
Also, I observe my program generating as many as 200 threads, which of course means that context-switching overhead is pretty big and sometimes results in the whole thing blocking for seconds, even with just one user tunneling through the program. My forward function is, in a gist, something like this:
// one direction copied on a new thread, the other on the calling thread
new Thread(new ThreadStart(() => inStream.CopyTo(outStream))).Start();
outStream.CopyTo(inStream);
of course with lots of error handling to prevent broken connections from holding things up forever. This seems to stall a lot, though. I can't figure out how to use asynchronous methods like BeginRead, which should help according to Google.
For any kind of proxy server (including an stunnel clone), opening the backend connection after you accept the frontend connection is clearly much simpler to implement.
If you pre-open backend connections in anticipation of receiving frontend connections, you can certainly save an RTT (which is good for latency), but you have to deal with the issue you hinted at: the backend will close idle connections. Whenever you receive a frontend connection, you run the risk that the backend connection you are about to associate with it was opened some time ago and is too old to use, or is about to be closed by the backend. You will have to manage a pool of currently open backend connections and periodically close and refresh the ones that have been idle for too long. There is even a race condition: if the backend decides a connection has been idle too long and closes it at the same moment the proxy receives a new frontend connection, the proxy may forward a request through that backend connection while the backend is closing it. This means you must know a priori how long backend connections can stay idle before the backend closes them (i.e. the timeout values configured on the backend), so you can give them up just before the backend decides they are too old.
So in summary: pre-opening backend connections will save an RTT versus opening them only on demand, but it is a lot of work, including subtle connection-pool management that is quite tough to implement bug-free. It is up to you to judge whether the extra complexity is worth it.
By the way, concerning your comment about handling several hundred simultaneous connections, I recommend implementing an I/O-bound program such as a proxy server around an event loop instead of around threads. Basically, you use non-blocking sockets and process events in a single thread (e.g. "this socket has new data waiting to be forwarded to the other side") instead of spawning a thread for each connection (which gets expensive both in thread creation and in context switches). To scale such an event-based model to multiple CPU cores, you can start a small number of parallel threads or processes (roughly one per CPU core), each handling many hundreds (or thousands) of simultaneous connections.
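As a rough sketch of that idea in C# (not a drop-in replacement for your forward function; it assumes .NET 4.5+, System.IO and System.Threading.Tasks, and that both sides are plain Streams), each direction of the copy becomes an asynchronous operation, so threads are only busy while bytes are actually moving instead of being parked two per connection:

static async Task ForwardAsync(Stream userStream, Stream sslStream)
{
    // CopyToAsync uses asynchronous I/O, so no thread is blocked while waiting for data.
    Task upstream = userStream.CopyToAsync(sslStream);
    Task downstream = sslStream.CopyToAsync(userStream);

    // Either side closing (or faulting) ends the tunnel.
    await Task.WhenAny(upstream, downstream);

    // Closing both streams unblocks whichever copy is still running.
    userStream.Dispose();
    sslStream.Dispose();

    // Observe the second copy's (expected) failure so it is not left unhandled.
    try { await Task.WhenAll(upstream, downstream); } catch { }
}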
In the project I'm working on, we have several services implemented using WCF. The situation I'm facing is that some of the services need to know when a session ends, so that they can appropriately update the status of that client. Notifying the service when a client terminates gracefully (e.g. the user closes the application) is easy; however, there are cases where the application might crash or the client machine might restart, in which case the client won't be able to notify the service about its status.
Initially, I was thinking about having a timer on the server side, which is triggered once a client connects, and changes the status of that client to "terminated" after, let's say, 1 minute. Now the client sends its status every 30 seconds to the service, and the service basically restarts its timer on every request from the client, which means it (hopefully) never changes the status of the client as long as the client is alive.
Even though this method is pretty reliable (not fully reliable; what if it takes the client more than 1 minute to send its status?), it's still not the best approach to solving this problem. Note that due to the original design of the system, I cannot implement a duplex service, which would probably make things a lot simpler. So my question is: is there a way for the service to know when the session ends (i.e. the connection times out or the client closes the proxy)? I came across this question: WCF: How to find out when a session is ending, but the link in the answer seems to be broken.
Another thing that I'm worried about is the way I'm currently creating my channel proxies, which is implemented like this:
internal static TResult ExecuteAndReturn<TProxy, TResult>(Func<TProxy, TResult> delegateToExecute)
{
    string endpointUri = ServiceEndpoints.GetServiceEndpoint(typeof(TProxy));
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    TResult valueToReturn;
    using (ChannelFactory<TProxy> factory = new ChannelFactory<TProxy>(binding,
        new EndpointAddress(new Uri(endpointUri),
            EndpointIdentity.CreateDnsIdentity(ServiceEndpoints.CertificateName))))
    {
        TProxy proxy = factory.CreateChannel();
        valueToReturn = delegateToExecute(proxy);
    }
    return valueToReturn;
}
So the channel is closed immediately after the service call is made (since the factory is in a using block). Is that, from the service's standpoint, an indication that the session has terminated? If so, should I keep only one instance of each service during the application's runtime, perhaps by using a singleton? I apologize if the questions seem a little vague; I figured there would be plenty of questions like these, but I wasn't able to find anything similar.
Yes, closing the channel terminates the session, but if there is an error of some kind then you are subject to the timeout settings of the service, like this:
<binding name="tcpBinding" receiveTimeout="00:00:10" />
This introduces a ten-second timeout if an error occurs.
Check out Managing WCF Session Lifetime with IsInitiating and IsTerminating
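In short, that article shows how session boundaries can be controlled on the contract itself; something along these lines (the contract and operation names are made up for illustration):

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    // IsInitiating = true (the default) means this call may start a new session.
    [OperationContract(IsInitiating = true, IsTerminating = false)]
    void Login(string userName);

    [OperationContract(IsInitiating = false, IsTerminating = false)]
    void AddItem(string item);

    // IsTerminating = true tells WCF to close the session once this call completes.
    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void Logout();
}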
I have a duplex WCF service and client running on the same machine. The client is configured to have 15 second timeouts:
<binding name="NetTcpBinding_IServiceIPC" closeTimeout="00:00:15"
openTimeout="00:00:15" receiveTimeout="00:00:15" sendTimeout="00:00:15" />
The client is handling faults like this:
client.InnerChannel.Faulted += FaultHandler;
client.InnerDuplexChannel.Faulted += FaultHandler;
client.ChannelFactory.Faulted += FaultHandler;
If I kill my Service process, the client correctly gets a TimeoutException after 15 seconds:
This request operation sent to net.tcp://localhost:8732/Service/ did not receive a reply within the configured timeout (00:00:15). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client. (System.TimeoutException)
However, the channel is not faulted at this point. My fault handler doesn't end up getting called until about 5 minutes after I kill the Service process. I thought that a TimeoutException would fault the channel (see this answer), but somehow that doesn't appear to be the case. Is there any way I can force the channel to be faulted more quickly after the Service process is killed?
This question, Duplex channel Faulted event does not rise on second connection attempt, suggests the Faulted event isn't always fired. The WCF state flow diagram on MSDN confirms that possibility: http://msdn.microsoft.com/en-us/library/ms789041.aspx
There are many paths to the Closed state that don't go through the Faulted state. Most likely, when you time out, the Abort() method is called and you transition from the Opened state to the Closing state without going through the Faulted state. Add some logging to check the state throughout execution. If you're trying to reopen the channel after timing out, that would explain why you end up in the Faulted state 5 minutes later. To solve your bigger problem, move the logic from your FaultHandler elsewhere so it's also executed when you reach the Closed state through other paths.
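A minimal sketch of that suggestion, using the proxy from the question (the cleanup method name is made up): attach the same recovery logic to both the Faulted and the Closed events, so it runs no matter which path the channel takes.

// Wire both events to the same recovery path.
client.InnerChannel.Faulted += (s, e) => HandleChannelDown();
client.InnerChannel.Closed += (s, e) => HandleChannelDown();

private void HandleChannelDown()
{
    // The old channel is unusable either way: abort the proxy, recreate it,
    // and re-register any callbacks the service expects.
}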
I know the question is old, but I searched quite a lot and always ended up here, so I thought I'd post my findings:
It depends on which timeout you hit.
If you hit the SendTimeout or ReceiveTimeout of your binding (in my case NetTcpBinding), then yes, the channel will fault.
BUT, if you hit the OperationTimeout of the channel (in my case the duplex channel), then you will just get a TimeoutException and the channel will NOT fault.
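To make the distinction concrete, here is a small sketch (the proxy variable is illustrative): the Send/Receive timeouts live on the binding, while OperationTimeout lives on the channel and is set by casting the proxy, exactly as the exception message above suggests.

// Binding-level timeouts: per the answer above, exceeding these faults the channel.
var binding = new NetTcpBinding();
binding.SendTimeout = TimeSpan.FromSeconds(15);
binding.ReceiveTimeout = TimeSpan.FromSeconds(15);

// Channel-level OperationTimeout: exceeding this throws a TimeoutException
// but, as described above, does not fault the channel.
((IContextChannel)proxy).OperationTimeout = TimeSpan.FromSeconds(15);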