WCF Channel: Server not responding after 10 minutes - c#

I created an architecture consisting of a client and a server that communicate over a WCF channel on localhost. Everything works fine, but if there is no activity (no requests from the client) between the two for more than 10 minutes, the server stops responding. The connection is still alive, but the server simply no longer responds to client requests, so the client must disconnect and reconnect to be able to send requests to the server again. Maybe I have overlooked some parameter.
The address I used is: net.tcp://localhost:8080/ICS;
Channel type: duplex;

The problem here is the ReceiveTimeout. The service host uses this timeout to decide when to drop idle connections: if no message is received within the configured time span, the connection is closed. By default it is 10 minutes.

Update: ReliableMessaging is not enabled, so editing InactivityTimeout makes no sense here.
Changing the ReceiveTimeout parameter of my binding settings, however, solves the problem.
My code:
var bind = new NetTcpBinding(); // my binding instance
var relSessionEnabled = bind.ReliableSession.Enabled; // this is false
var inactivityTimeout = bind.ReliableSession.InactivityTimeout; // this is 10 minutes
bind.ReceiveTimeout = TimeSpan.MaxValue; // this was 10 minutes before this instruction
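Note that the timeout matters on the binding the service host actually uses, not only on the client. A minimal host-side sketch, assuming a hypothetical contract IICS implemented by IcsService (both names are placeholders, not from the original post):
// Hypothetical host-side setup; contract and service names are placeholders.
var serverBinding = new NetTcpBinding();
serverBinding.ReceiveTimeout = TimeSpan.MaxValue; // keep idle connections open

var host = new ServiceHost(typeof(IcsService), new Uri("net.tcp://localhost:8080"));
host.AddServiceEndpoint(typeof(IICS), serverBinding, "ICS");
host.Open();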

Related

Rabbit MQ idle connection dropped

I have a .NET Windows service running as a consumer/subscriber which listens to a queue for messages.
The Windows service runs on the same machine where the RabbitMQ server software is installed.
If the queue is idle for 60 minutes, its connection gets dropped (I know this because I monitor the UI dashboard), which puts the Windows service into a bad state.
This is proving frustrating to resolve.
I have applied a heartbeat setting on the RabbitMQ client, but this has had no effect.
The following error is what I get in the log file when the connection drops:
=ERROR REPORT==== 22-Aug-2017::12:20:29 ===
closing AMQP connection <0.1186.0> ([FE80::C00E:F801:A2A7:8530]:61481 ->
[FE80::C00E:F801:A2A7:8530]:5672):
missed heartbeats from client, timeout: 30s
RabbitMQ server config file settings:
[{rabbit,[ {heartbeat, 60}]}].
Client code:
var connectionFactory = new ConnectionFactory
{
    HostName = hostName,
    UserName = userName,
    Password = password,
    RequestedHeartbeat = heartBeat,
    AutomaticRecoveryEnabled = true,
    NetworkRecoveryInterval = TimeSpan.FromSeconds(numberOfSecondsInterval),
    RequestedConnectionTimeout = RequestedConnectionTimeoutInMiliseconds
};
if (port > 0)
    connectionFactory.Port = port;

var connection = connectionFactory.CreateConnection();
var model = connection.CreateModel();
model.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);
return new Tuple<IConnection, IModel>(connection, model);
The heartbeat value above is set to 30 seconds,
the network recovery interval is set to 10 seconds, and
the requested connection timeout is set to 2 seconds.
I don't know what else I'm missing here in terms of configuration.
The server where the above runs is Windows Server 2012 R2.
Basically I'm expecting the connections to remain in place regardless of idle time.
Is there a Windows OS-level TCP keep-alive setting I need to make sure is in place as well?
The RabbitMQ version is 3.6.8.
I managed to stop the idle connections from dropping (after 60 minutes) on the RabbitMQ server by applying the re-connect logic that was referenced in this SO post.
To note: that answer was updated to state that the latest version of the RabbitMQ client has automatic connection recovery enabled, so manual re-connection logic should not be needed. This was not true in my case: I had already applied these settings, yet I still saw the connections dropping after 60 minutes of idle time. The client and the server in my scenario are on the same machine.
If anyone happens to know where the 60-minute idle time setting comes from, I would be grateful; I scanned all the RabbitMQ config settings and could not find anything related to it.
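For reference, a minimal sketch of what such manual re-connection logic might look like, assuming the standard RabbitMQ.Client API and the same hostName/userName/password/heartBeat variables as in the question; the shutdown handler and the 10-second retry delay are illustrative, not the exact code from the linked post:
// Illustrative manual re-connect; not the exact logic from the linked post.
void Connect()
{
    var factory = new ConnectionFactory
    {
        HostName = hostName,
        UserName = userName,
        Password = password,
        RequestedHeartbeat = heartBeat
    };
    var connection = factory.CreateConnection();
    connection.ConnectionShutdown += (sender, args) =>
    {
        // Connection dropped (e.g. after the 60-minute idle period):
        // wait briefly, then rebuild the connection, channel and consumer.
        Thread.Sleep(TimeSpan.FromSeconds(10));
        Connect();
    };
    var model = connection.CreateModel();
    model.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);
    // ... re-declare the queue and re-attach the consumer here ...
}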

WCF PerCall not firing Faulted event

I am using wsHttpBinding with InstanceContextMode.PerCall. On the server side I need to detect whether the client has properly received the response. If not, I need to clean some things up on the server side. For this I tried to use OperationContext.Current.InstanceContext.Closed and OperationContext.Current.InstanceContext.Faulted. OperationContext.Current.InstanceContext.Closed always fires, but the .Faulted event never fires no matter what I do (timing out the connection, closing the client while it is still receiving data, unplugging the cable). Also, OperationContext.Current.InstanceContext.State is always Closed.
How can I detect that the client did not receive the response?
As an alternative solution I was thinking that if I catch the error on the client side, I could call a cleanup method on the server side, but this may complicate things because the internet connection may be down for a while.
Channel events on the service side are fired when the client calls Close() successfully; you will not get channel events fired on the server side for client faults and timeouts until the moment the server replies, which is very stupid.
To solve your problem you have three solutions:
1- If the client is yours, use duplex to call a dummy method on the client; the channel will then fire its events.
2- Use a reliable session (a sessionful binding with reliable sessions turned on) so you can set the InactivityTimeout, which fires when infrastructure messages like keep-alive messages stop arriving. I guess the InactivityTimeout default is 10 minutes, while the Windows keep-alive frequency is 1 second, so you can set InactivityTimeout to 5 seconds, for example (see the sketch after this list).
3- Use IIS hosting with one of the HTTP bindings, so that inside your operation you can create a loop that keeps checking ClientDisconnectedToken.IsCancellationRequested on HttpContext.Current.Response in the System.Web namespace.
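A minimal sketch of option 2, assuming the wsHttpBinding from the question; the 5-second InactivityTimeout and the placement of the Faulted handler are illustrative choices, not taken from the answer above:
// Option 2 sketch: reliable session with a short inactivity timeout.
var binding = new WSHttpBinding(SecurityMode.None);
binding.ReliableSession.Enabled = true;
binding.ReliableSession.InactivityTimeout = TimeSpan.FromSeconds(5);

// Inside a service operation, watch the channel for the resulting fault:
OperationContext.Current.Channel.Faulted += (sender, args) =>
{
    // The client stopped sending infrastructure (keep-alive) messages;
    // clean up whatever server-side state belongs to this call here.
};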

How to make client automatically detect if WCF service is down or lost connection

I've looked at a bunch of threads like Detect if wcf service is activated, but those solutions require the client to proactively detect whether the WCF service is running. But what if I am in the middle of a transaction and the WCF service goes down or the connection is lost for some reason? In my testing there is no exception thrown; either nothing happens at all, or that twirly circle thing just keeps going round and round. I want the client to detect that the service/connection is lost and gracefully tell the user it's down. I have timeouts set in my code:
NetNamedPipeBinding binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
binding.OpenTimeout = TimeSpan.FromSeconds(15);
binding.SendTimeout = TimeSpan.FromSeconds(3000);
binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
this._engineChannel = new DuplexChannelFactory<IEngineApi>(this, binding, new EndpointAddress("net.pipe://localhost/Engine"));
But if I am in the middle of a transaction nothing actually happens; these timeouts don't seem to affect anything.
You can use one of two approaches:
1
The two things I do are a telnet check to make sure the WCF process has the socket open:
telnet host 8080
The second thing I do is always add an IsAlive method to my WCF contract so that there is a simple method to call to check that the service host is operating correctly:
public bool IsAlive()
{
    return true;
}
Source: Pinging WCF Services
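On the client, such a ping could be wrapped as below; this is only a sketch that assumes an IsAlive operation has been added to the IEngineApi contract from the question:
// Sketch: probe the service through the existing channel and treat any
// communication or timeout failure as "service down / connection lost".
bool IsServiceAlive(IEngineApi channel)
{
    try
    {
        return channel.IsAlive();
    }
    catch (CommunicationException)
    {
        return false; // channel faulted, endpoint unreachable, pipe broken, etc.
    }
    catch (TimeoutException)
    {
        return false; // no reply within the binding's SendTimeout
    }
}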
2
Use the Discovery/Announcement feature introduced in WCF 4.0
Discovery depends on the User Datagram Protocol (UDP). UDP is a connectionless protocol, so no direct connection is required between the client and the server. The client uses UDP to broadcast discovery requests for any endpoint supporting a specified contract type. The discovery endpoints that support this contract receive the request, and the discovery endpoint implementation responds back to the client with the addresses of the service endpoints. Once the client has discovered the services, it invokes the service to set up the call.
Simple usage example: http://www.codeproject.com/Articles/469549/WCF-Discovery
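A minimal sketch of runtime discovery, assuming the standard System.ServiceModel.Discovery types and reusing the IEngineApi contract from the question:
// Sketch only: probe the network for endpoints implementing IEngineApi.
var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
FindResponse response = discoveryClient.Find(new FindCriteria(typeof(IEngineApi)));
discoveryClient.Close();

if (response.Endpoints.Count == 0)
{
    // Nothing answered the probe - report the service as down to the user.
}
else
{
    EndpointAddress address = response.Endpoints[0].Address;
    // Build the channel against the discovered address as usual.
}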

Faulted WCF duplex callback after inactivity - keep alive long-running push notifications

In short
How do I prevent a duplex callback channel from being closed after an idle period?
In detail
I have a mostly working duplex WCF setup over NetTcpBinding, i.e. the client can talk to the server and the server can call back to the client.
Furthermore, I have a reliable session such that the client does not lose the connection to the server after the default period of inactivity, achieved with the following configuration on both client and server:
var binding = new NetTcpBinding(SecurityMode.None);
// Need to prevent channel being closed after inactivity
// i.e. need to prevent the exception: This channel can no longer be used to send messages as the output session was auto-closed due to a server-initiated shutdown. Either disable auto-close by setting the DispatchRuntime.AutomaticInputSessionShutdown to false, or consider modifying the shutdown protocol with the remote server.
binding.ReceiveTimeout = TimeSpan.MaxValue;
binding.ReliableSession.Enabled = true;
binding.ReliableSession.InactivityTimeout = TimeSpan.MaxValue;
However, after a period of inactivity of less than half an hour (I haven't measured the minimum time exactly), the server is unable to use the callback again: the server just blocks for a minute or so and I do not see any exceptions, while nothing happens on the client side (no evidence of the callback).
Leads and root causes?
Note that I can use the callback fine twice in a row consecutively, as long as I do not wait long in between the callback calls.
Are the callbacks configured somewhere else? Do callbacks have their own timeouts etc?
Might it be a blocking/threading issue? Do I need to either set UseSynchronizationContext=false on the client, or avoid blocking while waiting for the message to be received?
Should DispatchRuntime.AutomaticInputSessionShutdown be set to false, and if so, how? I'm not really sure how it relates to reliable sessions and I do not know where to access this property
Anything else?
I achieved this by extending the BaseClient class with an automatic keep-alive message that is invoked on the target interface when no other calls are being made.
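A minimal sketch of that idea, assuming a DuplexClientBase-derived client and a hypothetical KeepAlive operation on the service contract; the contract name IServer, the DoWork operation and the 30-second interval are illustrative, not taken from the answer:
// Sketch: reset an idle timer on every real call; when it elapses, ping the service.
public class KeepAliveClient : DuplexClientBase<IServer>
{
    private readonly Timer _idleTimer;

    public KeepAliveClient(InstanceContext callbackContext, Binding binding, EndpointAddress address)
        : base(callbackContext, binding, address)
    {
        // Hypothetical 30-second idle interval; tune it below the server's idle limit.
        _idleTimer = new Timer(_ => Channel.KeepAlive(), null,
                               TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }

    public void DoWork()
    {
        Channel.DoWork();
        // Postpone the next keep-alive because real traffic just went through.
        _idleTimer.Change(TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }
}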

MSMQ Poison message and TimeToReachQueue

I tried creating a poison message scenario in the following manner.
1- Created a message queue on a server (transactional queue).
2- Created a receiver app that handles incoming messages on that server.
3- Created a client app located on a client machine which sends messages to that server with the specific name for the queue.
4- I used the sender client app with the following code (C# 4.0 framework):
System.Messaging.Message mm = new System.Messaging.Message("Some msg");
mm.TimeToBeReceived = new TimeSpan(0, 0, 50);
mm.TimeToReachQueue = new TimeSpan(0, 0, 30);
mm.UseDeadLetterQueue = true;
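// mq is the System.Messaging.MessageQueue pointing at the transactional queue created in step 1.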
mq.Send(mm);
So this is setting the timeout to reach queue to 30 seconds.
First test worked fine. Message went through and was received by the server app.
For my second test, I disconnected my Ethernet cable, then did another send from the client machine.
I can see in the message queue on the client machine that the message is waiting to be sent ("Waiting for connection"). My problem is that when it goes beyond the 30 seconds (or the 50 seconds, too), the message never goes into the dead-letter queue on the client machine.
Why is that? I was expecting it to go there once it timed out.
Tested on Windows 7 (client) / Windows server 2008 r2 (server)
Your question is a few days old already. Did you find out anything?
My interpretation of your scenario would be that the unplugged cable is the key.
In the scenario John describes, there is an existing connection and the receiver could not process the message correctly within the set time limit.
In your scenario, however, the receiving endpoint never gets the chance to process the message, so the timeout can never occur. As you said, the state of the message is "Waiting for connection". A message that was never sent cannot logically have a timeout to reach its destination.
Just ask yourself how many resources Windows/MSMQ would unnecessarily sacrifice - and how often - checking message queues for how many conditions, when the queues are essentially inactive. There can be a lot of queues with a lot of messages on a system.
The behavior I would expect is that if you plug the network cable back in and the connection is re-established, then, only when it is needed, your poison message will be checked for the timeout and eventually moved to the dead-letter queue.
You might want to check this scenario out - or did you already check it in the meantime?
