I have a web service slowdown.
My web service is written in gSOAP and managed C++. It isn't hosted in IIS or Apache, but it speaks XML.
My client is in .NET
The service's computation time is light (<0.1 s to prepare a reply), so I expect it to be smooth, fast, and highly available.
I have about 100 clients, and a 1 s response time is mandatory.
Each client makes about one request per minute.
Clients check for the service's presence with a TCP open-port test.
So, to avoid possible congestion, I turned gSOAP's KeepAlive off.
Up to that point, everything runs fine: I barely see any connections in TCPView (Sysinternals).
A new synchronization program now calls the service in a loop.
That's a higher load, but everything is processed in less than 30 seconds.
In TCPView I see that about a thousand connections are in TIME_WAIT.
They slow down the service, and it now takes seconds to reply.
Could it be that I need to reset the SoapHttpClientProtocol connection?
Has anyone else seen these TIME_WAIT ghosts when calling a web service in a loop?
It sounds like you aren't closing the connection after each call and are opening a new one on every request. Either close the connection when you're done, or reuse the open connections, e.g. by re-enabling HTTP keep-alive on the generated proxy as sketched below.
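If you go the reuse route with a wsdl.exe-generated .NET proxy, one option is to override GetWebRequest and turn HTTP keep-alive back on. This is only a minimal sketch: MyGeneratedProxy stands in for your generated SoapHttpClientProtocol class, and the gSOAP server must accept keep-alive connections for it to help.

using System;
using System.Net;

// Derive from the generated proxy (or edit its partial class) and
// re-enable HTTP keep-alive so repeated calls share one TCP connection.
public class KeepAliveProxy : MyGeneratedProxy
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        var request = (HttpWebRequest)base.GetWebRequest(uri);
        request.KeepAlive = true; // reuse the socket instead of opening a new one per call
        return request;
    }
}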
Be very careful with the implementations mentioned above. There are serious problems with them.
The implementation described in yakkowarner.blogspot.com/2008/11/calling-web-service-in-loop.html (COMMENT ABOVE):
PROBLEM: All your work will be wiped out the next time you regenerate the proxy with wsdl.exe, and by then you will have forgotten what you did. Not to mention that this fix is rather hacky, relying on a message string to decide when to act.
The implementation described in forums.asp.net/t/1003135.aspx (COMMENT ABOVE):
PROBLEM: You are selecting a local port between 5000 and 65535, so on the surface this looks like a good idea. But if you think about it, there is no way (at least none I can think of) to reserve ports for later use. How can you guarantee that the next port on your list is not currently in use? You are picking ports sequentially, and if some other application takes the next port on your list, you are hosed. Or what if some other application on the client machine starts using random ports for its connections? You would be hosed at UNPREDICTABLE points in time, with a RANDOM error like "remote host can't be reached or is unavailable", which is even harder to troubleshoot.
Although I can't give you the right solution to this problem, some things you can do are:
Try to minimize the number of web service requests, or spread them out over a longer period of time.
For your type of app, maybe web services weren't the right architecture: for something with a 1 ms response time you should be using a messaging system, not a web service.
Raise your OS's limit on available ports to ~65K via the registry (MaxUserPort on Windows).
Lower your OS's TIME_WAIT duration (this brings its own list of problems).
1- The client application sends a request to an HTTP server (an ashx file, IHttpHandler).
Remark 1: The client is a .dll that will be hosted by other standalone applications.
Remark 2: The server was first developed as a web service; for unknown reasons it became very slow, so we reimplemented it from scratch.
2- The server registers the request in a database so that a long-running process can be performed on the data.
3- The client needs to be notified when the process finishes.
The first thing that crossed my mind was implementing a Timer in the client, although I'm not sure it is OK to do that inside a host application that isn't aware of such usage.
Then I wondered whether something in TcpListener, or in the lower layers of socket programming, might serve instead of a high-frequency timer flooding the server with update requests.
So, I'd appreciate any suggestion on the proper way to do this.
UPDATE:
After tidying up my code, I've updated the requirements:
1- The server "broadcasts" client IDs, like "Client-a, read your instructions", then "Client-c", "Client-j", ... . This is not mission critical: if a client misses the broadcast, it will come back after one minute on a timer tick and check its instructions.
2- The server is going to be hosted on a shared hosting plan, at least at first, so the solution must be workable within the limits of shared hosting.
3- Preferably, all clients connect through a single socket, with no extra resource usage.
Any recommendation is appreciated.
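To make the timer idea concrete, this is the kind of polling loop I have in mind. The status.ashx endpoint, its jobId parameter, and the "done" reply are placeholders for whatever the handler ends up exposing, and the one-minute delay matches the timer-tick fallback in the update:

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class JobPoller
{
    // one shared HttpClient so polling doesn't pile up sockets
    static readonly HttpClient client = new HttpClient();

    public static async Task WaitForCompletionAsync(string jobId, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // ask the server whether the long-running job has finished
            string status = await client.GetStringAsync(
                "http://example.com/status.ashx?jobId=" + jobId);
            if (status == "done")
                return;
            await Task.Delay(TimeSpan.FromMinutes(1), ct); // back off for a minute
        }
    }
}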
A while ago I came across an interesting article explaining that putting HttpClient in a using block will dispose of the object when the code has executed, but will not close the underlying TCP socket; the connection goes into the TIME_WAIT state and lingers there for 4 minutes (the default) before the OS releases it.
So basically using this multiple times:
using (var client = new HttpClient())
{
    //do something with http client
}
results in many open TCP connections sitting in TIME_WAIT.
You can read the whole thing here:
You're using HttpClient wrong and it is destabilizing your software
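For reference, the remedy that article recommends is a single long-lived client shared across the application rather than one per request; a sketch, not the article's exact code:

using System.Net.Http;

static class Http
{
    // created once and reused for the life of the process,
    // so connections are pooled instead of left in TIME_WAIT
    public static readonly HttpClient Client = new HttpClient();
}

// callers then write, e.g.: var body = await Http.Client.GetStringAsync(url);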
So I was wondering what would happen if I did the same with the ClientBase<TChannel>-derived service class that Visual Studio creates when you right-click a project and select Add Service Reference, and implemented this:
//SomeServiceOutThere inherits from ClientBase
using (var service = new SomeServiceOutThere())
{
    var serviceRequestParameter = txtInputBox.Text;
    var result = service.BaddaBingMethod(serviceRequestParameter);
    //do some magic (see Fred Brooks quote)
}
However, I haven't been able to recreate exactly the same behavior, and I wonder why.
I created a small desktop app and added a reference to an IIS-hosted WCF service.
Next I added a button that calls the service through the using block shown above.
After hitting the service the first time, I ran netstat for the IP; a single ESTABLISHED connection showed up.
So far so good. I clicked the button again, and sure enough a new connection was established while the first one went into the TIME_WAIT state.
However, after this, every time I hit the service it reused the ESTABLISHED connection rather than opening more of them as in the HttpClient demo (even when passing different parameters to the service, as long as the app kept running).
It seems that WCF is smart enough to realize there is already an established connection to the server, and uses that.
The interesting part is that when I repeated the process, but stopped and restarted the application between each call to the service, I did get the same behavior as with HttpClient.
There are some other potential problems with ClientBase (e.g. see here), and I know that temporarily open sockets may not be an issue at all if traffic to the service is relatively low or the server is set up for a large number of maximum connections. But I would still like a reliable way to test whether this could become a problem, and under what conditions (e.g. a long-running Windows service hitting the WCF service vs. a desktop application).
Any thoughts?
WCF does not use HttpClient internally. It probably uses HttpWebRequest, because that API already existed at the time, and it's likely a bit faster, since HttpClient is a wrapper around it.
WCF is meant for high-performance use cases, so the team made sure HTTP connections are reused. Not reusing connections by default is, in my mind, unacceptable; this is either a bug or a design problem in HttpClient.
The 4.6.2 desktop .NET Framework contains this line in HttpClientHandler.Dispose:
ServicePointManager.CloseConnectionGroups(this.connectionGroupName);
Since this code is not in CoreCLR, there is no documentation for it, and I don't know why it was added. It even has a bug: because of this.connectionGroupName = RuntimeHelpers.GetHashCode(this).ToString(NumberFormatInfo.InvariantInfo); in the ctor, two connectionGroupNames can clash. A hash code is a terrible way of obtaining random numbers that are supposed to be unique.
If you restart the process, there is no way to reuse existing connections; that's why you are seeing the old connections in a TIME_WAIT state. The two processes are unrelated: as far as the code in them (and the OS) knows, they are not cooperating in any way. It is also hard (though possible) to carry a TCP connection across a process restart, and no app that I know of does it.
Are you starting processes so often that this might become a problem? Unlikely, but if so, you can apply one of the general workarounds, such as reducing the TIME_WAIT duration.
Replicating this is easy: just start 100k test processes in a loop, along these lines:
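A rough sketch of that repro, where TestClient.exe is a hypothetical one-shot client that makes a single call and exits, leaving its connection behind in TIME_WAIT:

using System.Diagnostics;

class Repro
{
    static void Main()
    {
        for (int i = 0; i < 100000; i++)
        {
            // each run opens a fresh connection that the next process can't reuse
            using (var p = Process.Start("TestClient.exe"))
            {
                p.WaitForExit();
            }
        }
    }
}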
I could probably set up a couple of test-bed applications and find out, but I'm hoping someone has already experienced this or simply has a more intuitive understanding. I have three executables: two different clients (call them Client1.exe and Client2.exe) and a WCF service host (call it Host.exe) that hosts what is more or less a message-bus style service for the two clients. I won't get into the whys, as that's a long story and not productive to this question.
The point is this: Client1 sends a request through this service to Client2; Client2 performs some operations, then responds with the results to Client1. Client1 will always be the initiator of requests, so this order of operations is consistent. It also means that Client1 can open its channels to the service as needed, whereas Client2, because of the callback services involved, has to keep its channels open. I began by attempting to keep them alive, but all three executables run on the desktop, and PC sleep events, or other issues (I'm not sure what), seem to interfere. Once the channel times out, everything has to be restarted, which makes it a real pain. I have some ideas that might help the keep-alive approach, but they raised a question I don't have an answer to: is this the best use of my resources?
The way I figure it, there are two main approaches for Client2:
Keep-alive, with a lot of monitoring (timers and connection-state checks) and connection-resetting code. This would be faster, since Client2 could respond to requests immediately. The downside is that the channel has to be kept alive the whole time the user has Client2 open on their desktop, which could range from short and sweet to crazy-long.
Polling periodically for requests, which would consume resources only when checking for or processing a request from Client1. This would be slower, since polls are not real-time, but it would eliminate any concern about external issues disconnecting the service. It would also force me to add more state to the service: it's already a PerSession service with a list of available Client2 instance IDs (so that Client1 knows which instance it's talking to), but this would add more.
Client2 performs many other functions, so it still has to stay performant through all of this, which makes me wonder which approach is more likely to cost resources: polling, or attempting to keep the channel alive?
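For context, here is a hedged sketch of the duplex shape I'm describing, with hypothetical contract names. Client2 registers a callback channel that Host.exe uses to push Client1's requests, which is why Client2's channel has to stay open:

using System.ServiceModel;

public interface IBusCallback
{
    [OperationContract(IsOneWay = true)]
    void OnRequest(string payload);   // host pushes Client1's request to Client2
}

[ServiceContract(CallbackContract = typeof(IBusCallback))]
public interface IMessageBus
{
    [OperationContract]
    void Register(string clientId);               // Client2 subscribes, pinning its channel

    [OperationContract]
    string Send(string clientId, string payload); // Client1 initiates a request
}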
I'm writing a client/server architecture where there will be possibly hundreds of clients across multiple virtual machines, mostly on the intranet but some in other locations.
Each client will be gathering data constantly and sending a message to a server every second or so, each message about 128 characters long.
My question is: for this architecture, where I am writing both the client and the server in .NET, should I go with WCF or with socket code I've written previously? I need scalability (which the socket code was designed with in mind), reliability, and simply the ability to handle that many messages.
I would not make a final decision without performing a proof of concept. Create a very simple service, host it, and run a stress test to get real performance numbers, then validate the results against your requirements. You mentioned the number of messages, but not the expected response time. There is a similar question currently being discussed on the MSDN forum, complaining that WCF's response time is slow compared to raw sockets.
Other requirements are not directly mentioned in your post, so I will make some assumptions for best performance:
Use netTcpBinding: best performance, binary encoding, requires .NET on both server and clients. I guess you are going to use Net.Tcp anyway, because your other choice was direct socket programming.
Don't use security if you don't have to; it reduces performance. That's probably not an option for clients outside your intranet.
Reuse the proxy on clients where possible. Opening a TCP connection is expensive, and if you reuse the same proxy you will have a single connection per proxy. This also affects the instancing of your services: by default, a single service instance handles all requests from a single proxy.
Set service throttling so that your service host is ready for many clients (a sketch follows this list).
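A sketch of the binding, security, and throttling points above, configured in code rather than config; the contract, address, and limits are illustrative only:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class HostProgram
{
    static void Main()
    {
        var host = new ServiceHost(typeof(EchoService),
            new Uri("net.tcp://localhost:9000/echo"));

        // binary TCP binding with security off (intranet scenario)
        host.AddServiceEndpoint(typeof(IEchoService),
            new NetTcpBinding(SecurityMode.None), "");

        // raise the default throttles so hundreds of clients fit
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 200,
            MaxConcurrentSessions = 500,
            MaxConcurrentInstances = 500
        });

        host.Open();
        Console.WriteLine("listening; press Enter to stop");
        Console.ReadLine();
        host.Close();
    }
}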
You should also make some decisions about load balancing. Load balancing WCF net.tcp connections requires sticky sessions (session affinity), so that after opening the channel the client always calls the service on the same server (because the instance of that service was created on a single server only).
100 requests per second does not sound like much for a WCF service, especially with that small a payload. It should be quite quick to set up a simple test: a WCF service with one echo method that just returns its input, and a client with a bunch of threads calling it in a loop.
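Something along these lines would do; EchoServiceClient is a stand-in for the proxy that Add Service Reference generates for your service:

using System;
using System.Diagnostics;
using System.Threading;

class StressTest
{
    static void Main()
    {
        const int threads = 10, callsPerThread = 1000;
        var done = new CountdownEvent(threads);
        var sw = Stopwatch.StartNew();

        for (int t = 0; t < threads; t++)
        {
            new Thread(() =>
            {
                var proxy = new EchoServiceClient(); // one proxy (one connection) per thread
                for (int i = 0; i < callsPerThread; i++)
                    proxy.Echo("x");
                proxy.Close();
                done.Signal();
            }).Start();
        }

        done.Wait();
        Console.WriteLine(threads * callsPerThread + " calls in " + sw.ElapsedMilliseconds + " ms");
    }
}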
If you already have a working socket implementation you might keep it, but otherwise you can pick WCF and spend your precious development time elsewhere.
From my experience with WCF, I can tell you that its performance under high load is very good. In particular, you can choose between several bindings to meet your requirements in different scenarios (e.g. httpBinding for outside communication, netPeerTcpBinding in a local network).
Our server application listens on a port, and after a period of time it no longer accepts incoming connections. (And while I'd love to solve that issue, it's not what I'm asking about here.)
The strange thing is that when our app stops accepting connections on port 44044, so does IIS (on port 8080). Killing our app fixes everything: IIS starts responding again.
So the question is: can an application mess up the entire TCP/IP stack? And if so, how?
Senseless detail: Our app is written in C#, under .Net 2.0, on XP/SP2.
Clarification: IIS is not "refusing" the attempted connections; it never sees them. Clients get a "server did not respond in a timely manner" message (using the .NET TcpClient).
You may well be starving the stack. It is pretty easy to drain in an environment with a high rate of opens and closes per second, e.g. a web server handling lots of unpooled requests.
This is exacerbated by the default TIME_WAIT delay: the amount of time a closed socket waits before being recycled, which defaults to four minutes on Windows (TcpTimedWaitDelay, below).
There are a bunch of registry keys that can be tweaked; I suggest at least the following values are created/edited:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
TcpTimedWaitDelay = 30
MaxUserPort = 65534
MaxHashTableSize = 65536
MaxFreeTcbs = 16000
Plenty of docs on MSDN and TechNet describe what these keys do.
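If you want to script the change rather than edit the registry by hand, a sketch (requires administrator rights, uses the values suggested above, and a reboot is typically needed before they take effect):

using Microsoft.Win32;

class TcpTuning
{
    static void Main()
    {
        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"))
        {
            key.SetValue("TcpTimedWaitDelay", 30, RegistryValueKind.DWord);
            key.SetValue("MaxUserPort", 65534, RegistryValueKind.DWord);
        }
    }
}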
You haven't maxed out the available port handles, have you?
netstat -a
I saw something similar when an app was opening and closing ports (but not actually closing them correctly).
Use netstat -a to see the active connections when this happens. Perhaps your server app is not closing/disposing of its 'closed' connections.
Good suggestions from everyone, thanks for your help.
So here's what was going on:
It turns out that we had several services competing for the same port, and most of the time the "proper" service would get it. Occasionally a second service would grab the port first, and the first service would fall back to a different port. From then on, the services would keep grabbing new ports every time they serviced a request (since they weren't on their preferred ports), and eventually we exhausted all available ports.
Of course, the actual question was "Can an application mess up the entire TCP/IP stack?", and the answer to that question is yes. One way to do it is to listen on a whole bunch of ports.
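Purely as an illustration of that failure mode, a sketch that grabs listening ports in a loop, the way our misbehaving services effectively did, until the range is exhausted:

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class PortHog
{
    static void Main()
    {
        var listeners = new List<TcpListener>();
        try
        {
            while (true)
            {
                var listener = new TcpListener(IPAddress.Loopback, 0); // 0 = pick any free port
                listener.Start();
                listeners.Add(listener); // never stopped, so every port stays taken
            }
        }
        catch (SocketException)
        {
            // no ports left; by this point other apps start failing too
        }
    }
}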
I guess the port number comment from RichS is correct.
Other than that, the TCP/IP stack is just a module in your operating system and, as such, can have bugs that might allow an application to kill it. It wouldn't be the first driver to be killed by a program.
(A tip of the hat to Andrew Tanenbaum for insisting that operating systems should be modular rather than monolithic.)
I've been in a couple of similar situations myself. A good troubleshooting step is to attempt a connection from the affected machine to a known-good destination that isn't experiencing any connectivity issues at that moment. If the connection attempt fails, you are very likely to get more interesting details in the error message/code; for example, it could say that there aren't enough handles, or not enough memory.
From a support and sysadmin standpoint, I have only seen this on the rarest of occasions (though more than once), but it certainly can happen.
When diagnosing the problem, carefully eliminate the possible causes rather than blindly rebooting the system at the first sign of trouble. I only say this because many of the customers I work with are tempted to do exactly that.