I have a C# application in which the client uses WCF to talk to the server. In the background, every X seconds, the client calls a Ping method on the server (through WCF). The following error has occurred a couple of times (for different method calls):
System.ServiceModel.ProtocolException: A reply message was received for operation 'MyMethodToServer' with action 'http://tempuri.org/IMyInterface/PingServerResponse'. However, your client code requires action 'http://tempuri.org/IMyInterface/MyMethodToServerResponse'.
The failing method is not consistent; the error occurs on different methods.
How can it happen that a request receives the response of a different call?
I think you have a messy problem with asynchronous communication. My main suggestion (as your question isn't very clear) is to identify every request, trace the calls and wait for them, use asynchronous communication, and manage the concurrent work with proper threading.
As you present it, this is a typical architecture problem.
If you post more code, I can suggest some fixes and I'll be glad to update my answer.
If this occurs randomly rather than consistently, you might be running in a load-balanced setup and have deployed an update to only one of the servers.
Wild guess: your client uses the same connection to do two requests in parallel. So what happens is:
Thread 1 sends request ARequest
Thread 2 sends request BRequest
Server sends reply BReply
Thread 1 receives reply BReply while expecting AReply
If you have request logs on the server, it'll be easy to confirm: you'll likely see two requests arriving within a short delay from the client host experiencing the issue.
I think MaxConcurrentCalls and ConcurrencyMode may be relevant here (although I haven't touched WCF in a long while).
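If the shared-channel guess is right, one mitigation is to give each thread its own channel instead of a shared proxy. A minimal sketch, assuming a ChannelFactory-based client; "MyEndpoint" is a hypothetical endpoint configuration name and IMyInterface is the contract from the error message:

ChannelFactory<IMyInterface> factory = new ChannelFactory<IMyInterface>("MyEndpoint");

void CallFromWorkerThread()
{
    // One channel per thread/call, so replies cannot interleave on a shared connection.
    IMyInterface channel = factory.CreateChannel();
    try
    {
        channel.MyMethodToServer();
        ((IClientChannel)channel).Close();
    }
    catch
    {
        ((IClientChannel)channel).Abort(); // never leave a faulted channel open
        throw;
    }
}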
I'm using NServiceBus with RabbitMQ in my project. I have two services that don't know about each other and don't share anything. Service1 publishes request messages to endpoint1 (queue1); service2 listens on endpoint1 and publishes responses to endpoint2 (queue2). There are two questions:
How can service1 handle responses from service2 if service1 doesn't know the response message type but only expects some particular fields in the response message?
I want to create an async API method that sends a request to endpoint1 and waits for the response in endpoint2. Is that possible at all? And how can I ensure that the reply corresponds to the request?
I expect something like:
public async Task<object> SendRequest(string str)
{
    var request = new MyRequest(str);
    await endPoint1.Publish(request);
    var reply = await endPoint2.WaitingReply();
    return reply;
}
I would appreciate any help.
Whenever two things communicate, there is always a contract. When functions call each other the contract is the parameters that are required to call that function. With messaging the message is the contract. The coupling is towards the message, not the sender or receiver.
I'm not really sure what you're trying to achieve. You mention an async API, endpoint1, and endpoint2.
First of all, there's asynchronous execution and asynchronous communication. The async part in your example code is asynchronous execution of two methods that have the word await in front of them. When we talk about sending messages, that's asynchronous communication. A message is put on the queue and then the code moves on and never looks back at the message. Even when you use the request/reply pattern, no code is actually waiting for a message.
You can wait for a message by blocking the thread, but I highly recommend you avoid that and not use the NServiceBus callback feature. If you think you have to, think again. If you still think so, read the red remarks on that page. If they can't convince you, contact Particular Software to have them explain another time why not. ;-)
It could be that you need a reply message for whatever reason. If you build some website using SignalR (for example) and you want to inform the user on the website when a message returned and some work was completed, you can wait for a reply message. The result is that the website itself becomes an endpoint.
So if the website is EndpointA and it sends a message to EndpointB, it is possible to reply to that message. EndpointA would then also need a message handler for that message. If EndpointB first needs to send a message to EndpointC, which in turn responds to EndpointB and only then it replies back to EndpointA, NServiceBus can't easily help. Not because it's impossible, but because you probably need another solution. EndpointA should probably not be waiting for that many endpoints to reply, so many things could go "wrong" and take too much time.
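For the simple EndpointA-to-EndpointB case, the reply side could look something like this. A minimal sketch, assuming NServiceBus 6 or later; MyRequest and MyReply are hypothetical message types:

using System.Threading.Tasks;
using NServiceBus;

public class MyRequest : ICommand { public string Value { get; set; } }
public class MyReply : IMessage { public string Result { get; set; } }

// Runs in EndpointB; NServiceBus routes the reply back to the sending endpoint,
// which needs its own IHandleMessages<MyReply> handler to receive it.
public class MyRequestHandler : IHandleMessages<MyRequest>
{
    public Task Handle(MyRequest message, IMessageHandlerContext context)
    {
        return context.Reply(new MyReply { Result = message.Value });
    }
}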
If you're interested in seeing how replies work in combination with SignalR and so on, you can check out a demo I built for a presentation that shows exactly that.
I have a REST service in a self-hosted ASP.NET WebApi application (console).
Some clients poll the server at specific intervals to fetch new data. In general everything works fine.
The problem is that the server stops responding to requests after some random duration (~30 minutes to 2.5 hours). All client requests start to time out.
The weird thing is, the server doesn't even seem to receive the requests anymore (no controller method is invoked). The server doesn't throw any exceptions and the console app is still responsive. So I can only suppose the problem occurs before the request reaches the API controller.
In the debugger everything seems fine.
How can I diagnose such an issue?
What else can I try to fix the described behavior?
Notes:
Tested on multiple systems
.Net 4.5.1
Asp.Net WebApi 5.1.2
I have found the issue: the reason this happens is connection leaks. If you send requests and don't close the connections correctly, either after the request is finished or within an exception handler, the number of open connections will eventually reach its maximum. Either change the maximum number of open connections in the connection string or (the preferred way) make sure your code handles the closing part:
SqlConnection myConnection = new SqlConnection(ConnectionString);
try
{
    myConnection.Open();
    someCall(myConnection);
}
finally
{
    // Always close, even on exceptions, so the connection returns to the pool.
    myConnection.Close();
}
Credit goes to How can I solve a connection pool problem between ASP.NET and SQL Server?, where you can read more about this.
In my case, the issue was caused by never-ending tasks. Due to misuse of the Reactive Extensions API, I randomly created never-ending tasks. It seems that at some point the task scheduler simply couldn't handle them anymore, although I'm not completely sure about that.
Lesson learned: by doing bad things in your app code (too many tasks, leaked SQL connections, ...) you can kill the WebApi infrastructure so that it no longer handles requests at any level.
We have a pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered Wireshark captures of client<=>Service Bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to understand better. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint).
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes, the Receive calls all use long polling: they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message, it holds the connection open for a time period in case a message arrives; if a message arrives within that time frame, it is returned to the client. If no message is available by the end of the time frame, a response is sent to the client indicating that no message was there (aka, your null BrokeredMessage). If you call Receive with no overloads (like you've done here), it will immediately make another request. This loop continues until a message is received.
Thus, what you are seeing is the number of times the client requests a message but there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which simply return a null result immediately if there is no message. For both technologies it is common to implement an exponential back-off for requests; there are lots of examples out there of how to do this. This cuts back on how often you need to check the queue and can reduce your transaction count.
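For instance, a minimal exponential back-off sketch, assuming the Microsoft.ServiceBus.Messaging client from the question and a synchronous receive loop; the delay values are arbitrary:

TimeSpan delay = TimeSpan.Zero;
TimeSpan maxDelay = TimeSpan.FromMinutes(5);

while (true)
{
    BrokeredMessage message = _subscriptionClient.Receive();
    if (message != null)
    {
        message.Complete();
        // ... process the message ...
        delay = TimeSpan.Zero; // traffic again, so poll eagerly
    }
    else
    {
        // Empty receive: double the wait, up to the cap, before polling again.
        delay = delay == TimeSpan.Zero
            ? TimeSpan.FromSeconds(5)
            : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
        Thread.Sleep(delay);
    }
}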
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and if you get no messages back from a receive call, go back to watching the queue depth. I've not tried that, and I'd think you could get throttled if you did the queue depth check too often.
If these are your production numbers, then your subscription isn't really processing a lot of messages. It would likely be a really good idea to back off to a polling interval that is acceptable for how long a message may wait before it is processed. For example, if it is okay for a message to sit for more than 10 minutes, create a back-off approach that eventually just checks for a message every 10 minutes; when it gets one, process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local, it could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set on the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.
I need the following scenario: the client sends a message to the server without waiting for a response, and doesn't care whether the message was sent properly.
using (host.RemoteService client = new host.RemoteService())
{
    client.Open();
    client.SendMessage("msg");
}
In the scenario where the firewall is on, or there is no connection to the internet, the client dies at SendMessage; the program stops responding. I want the program not to care about the result: if there is no connection, the program should carry on, skipping SendMessage or something like that.
What should I do? Is there any solution for a non-blocking method?
Try something like this in your service contract:
[OperationContract(IsOneWay=true)]
void Send(string message);
See the following link:
One Way Operation in WCF
Edit: OP was already using my suggested solution.
Suggested approaches to solve the issue - taken from MSDN (One-Way Services):
Clients Blocking with One-Way Operations
It is important to realize that while some one-way applications return as soon as the outbound data is written to the network connection, in several scenarios the implementation of a binding or of a service can cause a WCF client to block using one-way operations. In WCF client applications, the WCF client object does not return until the outbound data has been written to the network connection. This is true for all message exchange patterns, including one-way operations; this means that any problem writing the data to the transport prevents the client from returning. Depending upon the problem, the result could be an exception or a delay in sending messages to the service.

You can mitigate some of this problem by inserting a buffer between the client object and the client transport's send operation. For example, using asynchronous calls or using an in-memory message queue can enable the client object to return quickly. Both approaches may increase functionality, but the size of the thread pool and the message queue still enforce limits.

It is recommended, instead, that you examine the various controls on the service as well as on the client, and then test your application scenarios to determine the best configuration on either side. For example, if the use of sessions is blocking the processing of messages on your service, you can set the System.ServiceModel.ServiceBehaviorAttribute.InstanceContextMode property to PerCall so that each message can be processed by a different service instance, and set the ConcurrencyMode to Multiple in order to allow more than one thread to dispatch messages at a time. Another approach is to increase the read quotas of the service and client bindings.
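As a concrete illustration of the buffering idea above, my own sketch (not from MSDN) moves the one-way call onto a background task and swallows transport failures, assuming host.RemoteService is the generated WCF proxy from the question:

public void SendMessageFireAndForget(string message)
{
    Task.Run(() =>
    {
        var client = new host.RemoteService();
        try
        {
            client.Open();
            client.SendMessage(message); // one-way: returns once the data is written to the transport
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort(); // no connection: the caller doesn't care, just clean up
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
    });
}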
Modify your operation contract attribute:
[OperationContract(IsOneWay=true)]
I'm trying to write an async socket application which transfers complex objects between the two sides.
I used the example here...
Everything is fine until I try to send data that spans multiple packets. When the transferred data requires a multi-packet transfer, the server application hangs and goes out of control without any errors...
Many hours later I found a solution: if I close the client sender socket after each EndSend callback, the problem goes away. But I couldn't understand why this is necessary. Is there any other solution for this situation?
My two projects are the same as the example above; I only changed the EndSend callback method as follows:
public void EndSendCallback(IAsyncResult result)
{
    Status status = (Status)result.AsyncState;
    int size = status.Socket.EndSend(result);
    status.Socket.Close(); // <--------------- This line solved the situation
    Console.Out.WriteLine("Send data: " + size + " bytes.");
    Console.ReadLine();
    allDone.Set();
}
Thanks..
This is due to the example code not handling multiple packets (and being broken).
A few observations:
The server can only handle 1 client at a time.
The server simply checks whether the data arriving in a single read is smaller than the data requested and, if so, assumes it is the last part.
The server then ignores the client socket while leaving the connection open. This puts the responsibility of closing the connection on the client side which can be confusing and which will waste resources on the server.
Now the first observation is an implementation detail and not really relevant in your case. The second observation is relevant for you, since it will likely result in unexplained bugs: probably not in development, but when this code is actually running somewhere in a real scenario. Sockets are stream-based, not message-based: when the client sends 1000 bytes, that might require one call to Receive on the server, or ten. A call to Receive simply returns as soon as there is 'some' data available. What you need to do is implement some sort of protocol that communicates either how much data is being sent over, or when all the data has been sent. I really recommend just sticking with the HTTP protocol, since it is a well-tested and well-supported protocol that suits most scenarios.
The third observation might also cause bugs where the server runs out of resources, since it leaves all connections open.
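To make the "how much data" idea concrete, here is a minimal length-prefix framing sketch (my own illustration, not code from the linked example): the sender writes a 4-byte length before the payload, and the receiver loops until it has read exactly that many bytes.

using System;
using System.Net.Sockets;

static class Framing
{
    // Send: prefix the payload with its length so the receiver knows when to stop reading.
    public static void SendFrame(Socket socket, byte[] payload)
    {
        socket.Send(BitConverter.GetBytes(payload.Length)); // 4-byte length prefix
        socket.Send(payload);
    }

    // Receive: read the 4-byte length, then loop until the full payload has arrived.
    public static byte[] ReceiveFrame(Socket socket)
    {
        int length = BitConverter.ToInt32(ReceiveExactly(socket, 4), 0);
        return ReceiveExactly(socket, length);
    }

    private static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            // Receive may return fewer bytes than requested; keep reading until done.
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new SocketException((int)SocketError.ConnectionReset);
            offset += read;
        }
        return buffer;
    }
}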