I have a duplex WCF service and client running on the same machine. The client is configured to have 15 second timeouts:
<binding name="NetTcpBinding_IServiceIPC" closeTimeout="00:00:15"
openTimeout="00:00:15" receiveTimeout="00:00:15" sendTimeout="00:00:15" />
The client is handling faults like this:
client.InnerChannel.Faulted += FaultHandler;
client.InnerDuplexChannel.Faulted += FaultHandler;
client.ChannelFactory.Faulted += FaultHandler;
If I kill my Service process, the client correctly gets a TimeoutException after 15 seconds:
This request operation sent to net.tcp://localhost:8732/Service/ did not receive a reply within the configured timeout (00:00:15). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client. (System.TimeoutException)
However, the channel is not faulted at this point. My fault handler doesn't end up getting called until about 5 minutes after I kill the Service process. I thought that a TimeoutException would fault the channel (see this answer), but somehow that doesn't appear to be the case. Is there any way I can force the channel to be faulted more quickly after the Service process is killed?
This question, Duplex channel Faulted event does not rise on second connection attempt, suggests the Faulted event isn't always fired, and the WCF state flow diagram on MSDN confirms that possibility: http://msdn.microsoft.com/en-us/library/ms789041.aspx
There are many paths to the closed state that don't go through the faulted state. Most likely, when you time out, Abort() is being called and you transition from the opened state to the closing state without ever entering the faulted state. Add some logging to check the channel state throughout execution. If you're trying to reopen the channel after timing out, that would explain why you end up in the faulted state 5 minutes later. To solve your bigger problem, move the logic in your fault handler elsewhere so it also executes when you reach the closed state through other paths.
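For illustration, here is a minimal sketch of that logging, plus an explicit Abort() on timeout to force the transition right away (client and DoWork are placeholders for your proxy and operation):

void LogState(string when)
{
    // Both the inner channel and the factory expose a CommunicationState.
    Console.WriteLine("{0}: channel={1}, factory={2}",
        when, client.InnerChannel.State, client.ChannelFactory.State);
}

try
{
    LogState("before call");
    client.DoWork();
}
catch (TimeoutException)
{
    LogState("after timeout"); // often still Opened here, not Faulted
    client.Abort();            // forces Closing -> Closed immediately
    LogState("after abort");
}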
I know the question is old, but I searched quite a lot and always ended up here, so I thought I'd post my findings.
It depends on which timeout you hit.
If you hit the SendTimeout or ReceiveTimeout of your binding (in my case NetTcpBinding), then yes, the channel will fault.
But if you hit the OperationTimeout of the channel (in my case a duplex channel), you just get a TimeoutException and the channel will NOT fault.
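For reference, the binding timeouts above live in config, while OperationTimeout is a per-channel property you reach by casting the proxy to IContextChannel, as the exception message suggests. A minimal sketch, with callbackHandler standing in for your callback instance:

var factory = new DuplexChannelFactory<IServiceIPC>(
    new InstanceContext(callbackHandler), "NetTcpBinding_IServiceIPC");
IServiceIPC proxy = factory.CreateChannel();
// This is the timeout that, when hit, throws TimeoutException
// without faulting the channel.
((IContextChannel)proxy).OperationTimeout = TimeSpan.FromSeconds(15);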
I am trying to control the maximum total duration of a single connection in HttpListener. I am aware of the TimeoutManager property and the five or so timeout values it contains, but it is unclear whether setting each of those values covers every place a delay can occur in a connection and adds up to a cap on the total.
I am looking for something more along the lines of: "If a connection has lasted more than x seconds from the moment it was opened, abort it without sending anything else or waiting for anything else."
EDIT
To clarify, the scenario I was experimenting with involves the server trying to send the response and the client not receiving it. This causes HttpListenerResponse.OutputStream.Write() to hang indefinitely. I was trying to find a method I can call from another thread to hard-abort the connection. I tried OutputStream.Close() and got "Cannot Close Stream until all bytes are written". I also tried HttpListenerResponse.Abort(), which produced no visible effect.
None of those properties will do what you want. HttpListener is intended to control the request flow (incoming and outgoing data), so it doesn't manage the time between when the request has been fully received and when you send a response; it's your responsibility to take care of that.
You should create your own mechanism to abort the request if the total time is higher than the desired one. A timer can be enough: when a new connection is created, enqueue a timer whose expiration is the total timeout; if the request ends before the timer expires, cancel the timer, otherwise let the timer abort the request.
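A minimal sketch of that idea, using a CancellationTokenSource as the timer and HttpListenerResponse.Abort() as the hard abort (the 30-second budget is an assumption; as your edit notes, you may need to experiment with which abort call actually unblocks a hung write in your environment):

using System;
using System.Net;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static async Task HandleAsync(HttpListenerContext context)
{
    // Budget for the whole connection, from accept to final byte.
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
    using (cts.Token.Register(() => context.Response.Abort()))
    {
        try
        {
            byte[] payload = Encoding.UTF8.GetBytes("done");
            await context.Response.OutputStream.WriteAsync(payload, 0, payload.Length);
            context.Response.Close();
        }
        catch (HttpListenerException)
        {
            // Expected when Abort() tears the connection down mid-write.
        }
        catch (ObjectDisposedException)
        {
            // The response may already be disposed by the abort.
        }
    }
}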
In short
How do I prevent a duplex callback channel from being closed after an idle period?
In detail
I have a mostly working duplex WCF setup over NetTcpBinding i.e. the client can talk to the server and the server can call back to the client.
Furthermore, I have a reliable session such that the client does not lose the connection to the server after the default period of inactivity, achieved with the following configuration on both client and server:
var binding = new NetTcpBinding(SecurityMode.None);
// Need to prevent channel being closed after inactivity
// i.e. need to prevent the exception: This channel can no longer be used to send messages as the output session was auto-closed due to a server-initiated shutdown. Either disable auto-close by setting the DispatchRuntime.AutomaticInputSessionShutdown to false, or consider modifying the shutdown protocol with the remote server.
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.ReliableSession.Enabled = true;
binding.ReliableSession.InactivityTimeout = TimeSpan.MaxValue;
However, after a period of inactivity of less than half an hour (I haven't measured the minimum time exactly), the server is unable to use the callback again: the server just blocks for a minute or so without raising any exceptions, while nothing happens on the client side (no evidence of a callback).
Leads and root causes?
Note that I can use the callback fine twice in a row, as long as I do not wait long between the calls.
Are the callbacks configured somewhere else? Do callbacks have their own timeouts etc?
Might it be a blocking/threading issue? Do I need to set UseSynchronizationContext=false on the client, or avoid blocking while waiting for the message to be received?
Should DispatchRuntime.AutomaticInputSessionShutdown be set to false, and if so, how? I'm not really sure how it relates to reliable sessions, and I do not know where to access this property.
Anything else?
I achieved this by extending the BaseClient class with an automatic keep-alive message that is invoked on the target interface whenever no other calls are being made.
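A rough sketch of that approach, assuming a hypothetical no-op KeepAlive operation on the contract and using DuplexClientBase as the base class (all names and the 5-minute interval are assumptions):

using System;
using System.ServiceModel;
using System.Threading;

[ServiceContract(CallbackContract = typeof(IMyCallback))]
public interface IMyService
{
    [OperationContract]
    void KeepAlive(); // hypothetical no-op ping
}

public interface IMyCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(string message);
}

public class KeepAliveClient : DuplexClientBase<IMyService>, IMyService
{
    private static readonly TimeSpan Interval = TimeSpan.FromMinutes(5);
    private readonly Timer _pingTimer;

    public KeepAliveClient(InstanceContext callbackContext)
        : base(callbackContext)
    {
        _pingTimer = new Timer(_ => Ping(), null, Interval, Interval);
    }

    public void KeepAlive()
    {
        // Reset the timer on every real call so pings only fill idle gaps.
        _pingTimer.Change(Interval, Interval);
        Channel.KeepAlive();
    }

    private void Ping()
    {
        try { Channel.KeepAlive(); }
        catch (CommunicationException)
        {
            // Channel already dead; let the normal fault handling take over.
        }
    }
}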
We have a pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint).
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
After the client receives a legitimate message from the Web Role, the pattern described above does not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes, the Receive calls all use long polling, which means they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message, it holds the connection open for a time period in case a message arrives. If a message arrives within that time frame, it is returned to the client; if not, a response is sent to the client indicating that no message was there (aka your null BrokeredMessage). If you call Receive with no overloads (as you've done here), it immediately makes another request. This loop continues until a message is received.
Thus, what you are seeing is the number of times the client requests a message when there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back-off for requests; there are lots of examples out there of how to do this. This cuts back on how often you check the queue and can reduce your transaction count.
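For example, here is a rough sketch of a back-off version of the loop from your question (the 5-second start and 10-minute ceiling are arbitrary assumptions):

private async Task ListenWithBackOffAsync()
{
    TimeSpan delay = TimeSpan.Zero;
    TimeSpan maxDelay = TimeSpan.FromMinutes(10);

    while (true)
    {
        BrokeredMessage message = await _subscriptionClient.ReceiveAsync();
        if (message != null)
        {
            await message.CompleteAsync();
            // Do some things...
            delay = TimeSpan.Zero; // traffic is flowing again; poll eagerly
        }
        else
        {
            // Idle result: double the wait before the next poll, up to the cap.
            delay = delay == TimeSpan.Zero
                ? TimeSpan.FromSeconds(5)
                : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
            await Task.Delay(delay);
        }
    }
}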
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque, given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and if a receive call returns no messages, go back to watching the queue depth. I've not tried that, and I'd think you could get throttled if you did the queue depth check too often.
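A sketch of that depth check using the NamespaceManager API, in case you want to experiment with it (topic and subscription names are placeholders):

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
var subscription = namespaceManager.GetSubscription("mytopic", "mysubscription");
if (subscription.MessageCount > 0)
{
    // Enter the receive loop; drop back to watching the depth when
    // a receive call comes back empty.
}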
If these are your production numbers, then your subscription isn't really processing a lot of messages, and it would likely be a really good idea to have a back-off policy that waits as long as is acceptable before a message is processed. For example, if it is okay for a message to sit for more than 10 minutes, create a back-off approach that eventually checks for a message only every 10 minutes; when it gets one, process it and immediately check again.
Oh, and there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local, it could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set in the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.
I have an issue with WCF timing out. The strange thing is that my method is actually being called on the server, but the client call to the object returned from CreateChannel() is timing out with an exception.
The entire error message:
This request operation sent to net.pipe://localhost/AndonServer did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
I could just decrease the timeout setting to 5 seconds, say, but that's a bit dirty. Anyone have any ideas why this might be happening?
Mark
It means the timeout period elapsed while waiting for a reply from the server. By default, all calls in WCF have both a request and a reply, even void methods, so the server needs to complete the call promptly for WCF to send a reply. Another option is to use a one-way call if the client does not require a reply from the server.
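For example, marking the operation one-way in the contract means the client returns as soon as the message is written to the transport, with no reply to wait on (IAndonService and ReportStatus are stand-in names based on your endpoint address):

[ServiceContract]
public interface IAndonService
{
    // No reply message is generated, so the client cannot time out
    // waiting for one; it returns once the message is on the wire.
    [OperationContract(IsOneWay = true)]
    void ReportStatus(string status);
}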
Does a OneWay operation in a WCF service keep executing until the operation is complete?
From my experiments, I think there is no timeout; I was able to run an operation for half an hour (I stopped it after that).
Can someone experienced in WCF confirm this? If there is a timeout, where can I specify it?
OneWay operations don't wait for a reply message. The client just writes the data to the network connection and returns, so the only "wait time" is the time required to write the message to the network.
Be aware though that WCF can still block the client (Clients Blocking with One-Way Operations):
this means that any problem writing the data to the transport prevents the client from returning. Depending upon the problem, the result could be an exception or a delay in sending messages to the service.
Edit: Regarding timeouts, they are set on the binding. If the operation cannot complete its send, it can still time out.
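For example, something like this still applies to one-way calls (a sketch; the value is arbitrary):

var binding = new NetTcpBinding(SecurityMode.None);
// Bounds how long the client may block while writing the message
// to the transport, even for a one-way operation.
binding.SendTimeout = TimeSpan.FromSeconds(30);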
There is no timeout; you have to handle it yourself in the running operation. Timeouts relate to working with channels, but in the case of a one-way operation the message is received and passed to the operation, and no further interaction with the channel will ever occur.