Any Downside to Increasing "maxconnection" Setting in system.net? - c#

Our system was having a problem with WCF connections being limited, which was solved by this answer. We added the maxconnection setting to the client's web.config, and the limit of two concurrent connections went away.
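For reference, the setting in question is the connectionManagement element under system.net; a minimal sketch (the wildcard address and the value 100 are illustrative, not a recommendation):

    <configuration>
      <system.net>
        <connectionManagement>
          <!-- Raise the per-host outbound connection limit above the default of 2 -->
          <add address="*" maxconnection="100" />
        </connectionManagement>
      </system.net>
    </configuration>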
Outside of the obvious impacts (e.g. overloading the server), are there any downsides to setting this limit to a number (possibly much) higher than the default "2"? Any source on the reasoning for having the default so low to begin with?

In general, it's OK to raise the client connection limit, with a few caveats:
If you don't own the server, be careful: your client app's traffic might be mistaken for a DoS attack, which could get your client IP address blocked by the server. Even if you own the server, this is sometimes a risk. For example, we've had cases where a bug in our app's login page caused multiple requests to be issued when the user held down the Enter key. Those users then got blocked from our app by our firewall's DoS protection!
Connections aren't free. They take up RAM, CPU, and other scarce resources. Having 5 or 10 client connections isn't a problem, but when you have hundreds of open client connections then you risk running out of resources on the client.
Proxies or edge servers between client and server may impose their own limits, so you may try to open 1,000 simultaneous connections only to have the 5th and later connections refused by the proxy.
Sometimes, adding more client connections is a workaround for an architectural problem. Consider fixing the architectural problem instead. For example, if you're opening so many connections because each request takes 10 minutes to return results, then you really should look at a more loosely-coupled solution (e.g. post requests to a server queue and come back later to pick up results) because long-lived connections are vulnerable to network disruption, transient client or server outages, etc.
Being able to open many simultaneous connections can make it risky to restart your client or server app: even if your "normal" load is only X requests/sec, if either client or server has been offline for a while, then the client may try to catch up on pending requests by issuing hundreds or thousands of requests all at once. Many servers respond non-linearly to overload, where an extra 10% of load may double response times, creating a runaway overload condition.
The solution to all these risks is to carefully load-test both client and server with the maximum # of connections you want to support... and don't set your connection limit higher than what you've tested. Don't just crank the connection limit to 1,000,000 just because you can!
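As a side note, the same limit can also be raised programmatically on the client; a minimal sketch (the value 100 is illustrative and should come out of your load testing):

    using System.Net;

    class ConnectionLimitConfig
    {
        static void Main()
        {
            // Applies to outgoing HTTP connections created after this point;
            // roughly equivalent to maxconnection="100" for all addresses in config.
            ServicePointManager.DefaultConnectionLimit = 100;
        }
    }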
To answer the other part of your question, the default limit of 2 connections goes back to a very old version of the HTTP specification which limited clients to 2 connections per domain, so that web browsers wouldn't swamp servers with a lot of simultaneous connections. For more details, see this answer: https://stackoverflow.com/a/39520881/126352

Related

error message trying to connect a few hundred clients to a single server all at once (socket error 10061)

I've created a server-client application (well, I created the client; my coworker created the server, in a different language). When I try to connect 900+ clients (all individual TCP clients) to the server, I get the following exception:
No connection could be made because the target computer actively refused it.
Socket error: 10061
WSAECONNREFUSED
Connection refused. No connection could be made because the target computer actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host—that is, one with no server application running.
Eventually, if I wait long enough, they will all connect (because we've built our own reconnect/keep-alive on top of the TCP socket), so if a connection fails it simply retries until it succeeds.
Do I get this error because the server is 'overloaded' and can't handle all the requests at once? (I'm creating 900 clients, each in a separate thread, so they are pretty much all trying to connect simultaneously.)
Is there a way to counteract this? Can we tweak a TCP socket option so it can handle more clients at once?
It might also be worth noting that I'm running the server and client on the same machine; running them on different machines seems to reduce the number of error messages. That's why I think this is some sort of capacity problem: the server can't handle them all that fast.
Do I get this error because the server is 'overloaded' and can't handle all the requests at once? (I'm creating 900 clients, each in a separate thread, so they are pretty much all trying to connect simultaneously.)
Yes. Creating 900 individual threads is excessive; use a bounded thread pool instead (for example, queue the pending work in a ConcurrentQueue and have a limited number of workers drain it).
You can increase the length of the pending-connections queue using Socket.Listen(backlog), where backlog is the maximum number of connections waiting to be accepted. You may specify a higher value, but the effective backlog is capped at a limit that depends on the operating system.
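As a minimal sketch, assuming a plain blocking Socket on the server side (the port and backlog values are illustrative, and the OS may still cap the backlog):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class BacklogListener
    {
        static void Main()
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 9000));

            // Ask for a larger pending-connection queue; the OS silently
            // caps this at its own limit, so treat it as a hint.
            listener.Listen(500);

            while (true)
            {
                Socket client = listener.Accept(); // blocks until a client connects
                // Hand the accepted socket off to worker code here.
            }
        }
    }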
Is there a way to counteract this? Can we tweak a TCP socket option so it can handle more clients at once?
Not in this case (900 simultaneous connection attempts is already too many), but in general, yes: for cases with fewer simultaneous connection attempts, providing a larger backlog to Socket.Listen(backlog) helps. Also, use a connection pool (already managed by the .NET runtime) and a ConcurrentQueue to process the work in order. But remember, you can't get more backlog than the operating system allows, and 900 is far too much for simpler machines!
It might also be worth noting that I'm running the server and client on the same machine; running them on different machines seems to reduce the number of error messages.
It will, because the load is then distributed across two machines. But you still shouldn't connect everything at once: queue the connection attempts (for example with a ConcurrentQueue) and let a limited number of workers process them in order.
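To illustrate the idea, here is a minimal sketch that queues the 900 connection attempts and lets a small, fixed number of workers drain the queue (the endpoint and worker count are illustrative):

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class ThrottledConnector
    {
        static void Main()
        {
            // 900 pending client IDs, matching the scenario in the question.
            var pending = new ConcurrentQueue<int>(Enumerable.Range(0, 900));
            var connected = new ConcurrentBag<TcpClient>();

            // Only a handful of workers connect at any one time, so the
            // server's listen backlog is never hit by 900 simultaneous SYNs.
            var workers = Enumerable.Range(0, 10).Select(_ => Task.Run(() =>
            {
                while (pending.TryDequeue(out _))
                {
                    var client = new TcpClient();
                    client.Connect("127.0.0.1", 9000); // illustrative endpoint
                    connected.Add(client);
                }
            })).ToArray();

            Task.WaitAll(workers);
            Console.WriteLine($"Connected {connected.Count} clients.");
        }
    }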

WebSockets persistent connection

Since the connection is persistent, I understand that a lot of the network congestion caused by setting up new connections is avoided, in cases like periodic polling of hundreds of servers.
I have a simple question: doesn't it put load on both the server and the client to keep the connection open for a long time? Is the gain lost?
A TCP (and hence WebSocket) connection established to a server, but not sending or receiving (sitting idle), does consume memory on the server, but no CPU cycles.
Keeping the TCP connection alive (and also "responsive") in certain network environments, such as mobile, may require periodically sending and receiving small amounts of data. WebSocket, for example, has built-in ping/pong (non app data) messages for that. Doing so consumes some CPU cycles, but not many.
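On a .NET client, for instance, the built-in keep-alive can be configured like this (a minimal sketch; the interval and the endpoint URL are placeholders):

    using System;
    using System.Net.WebSockets;
    using System.Threading;
    using System.Threading.Tasks;

    class KeepAliveWebSocket
    {
        static async Task Main()
        {
            using (var ws = new ClientWebSocket())
            {
                // Send keep-alive frames every 30 seconds so an otherwise idle
                // connection stays responsive through NATs and mobile networks.
                ws.Options.KeepAliveInterval = TimeSpan.FromSeconds(30);

                await ws.ConnectAsync(new Uri("wss://example.com/socket"), CancellationToken.None);
                // ... send/receive application messages here ...
            }
        }
    }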
Persistent connections are a trade-off.
Yes, they require the server to store the state associated with each connection, they require maintenance (such as keep-alive packets or websocket pings), and they require monitoring (to detect state changes or arriving information). So you spend some memory and CPU resources per connection.
BUT they save a lot of time, and often resources, on connection re-initialization; once established, they allow both parties to send and receive information at any time, as opposed to non-persistent client-server systems like classic HTTP.
So it really depends on the system you're building. If your system has millions of users that need connectivity to the server only once in a while, then the benefit of keeping these connections open is probably not worth the extra resources. But if you're designing something like a chat server for a hundred people, then the additional responsiveness is probably worth it.

Are TCP Connections resource intensive?

I have a TCP server that gets data from one (and only one) client. When this client sends the data, it makes a connection to my server, sends one (logical) message and then does not send any more on that connection.
It will then make another connection to send the next message.
I have a co-worker who says that this is very bad from a resources point of view. He says that making a connection is resource intensive and takes a while. He says that I need to get this client to make a connection and then just keep using it for as long as we need to communicate (or until there is an error).
One benefit of using separate connections is that I can probably multi-thread them and get more throughput on the line. I mentioned this to my co-worker and he told me that having lots of sockets open will kill the server.
Is this true? Or can I just allow it to make a separate connection for each logical message that needs to be sent. (Note that by logical message I mean an xml file that is of variable length.)
It depends entirely on the number of connections that you are intending to open and close and the rate at which you intend to open them.
Unless you go out of your way to avoid the TIME_WAIT state by aborting the connections rather than closing them gracefully, you will accumulate sockets in the TIME_WAIT state on either the client or the server. With a single client it doesn't actually matter where they accumulate, as the issue will be the same. If the rate at which you use your connections is faster than the rate at which your TIME_WAIT connections close, then you will eventually reach a point where you cannot open any new connections, because you have no ephemeral ports left: all of them are in use by sockets sitting in TIME_WAIT.
I write about this in much more detail here: http://www.serverframework.com/asynchronousevents/2011/01/time-wait-and-its-design-implications-for-protocols-and-scalable-servers.html
In general I would suggest that you keep a single connection and simply reopen it if it gets reset. The logic may appear to be a little more complex but the system will scale far better; you may only have one client now and the rate of connections may be such that you do not expect to suffer from TIME_WAIT issues but these facts may not stay the same for the life of your system...
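A minimal sketch of that approach: keep one connection, and transparently reopen it if a send fails (the retry logic is illustrative; a real implementation would also handle message framing and read timeouts):

    using System;
    using System.IO;
    using System.Net.Sockets;

    class ReusableConnection
    {
        private readonly string _host;
        private readonly int _port;
        private TcpClient _client;

        public ReusableConnection(string host, int port)
        {
            _host = host;
            _port = port;
        }

        public void Send(byte[] message)
        {
            try
            {
                EnsureConnected();
                _client.GetStream().Write(message, 0, message.Length);
            }
            catch (IOException)
            {
                // The connection was reset: drop it and retry once on a fresh socket.
                _client?.Close();
                _client = null;
                EnsureConnected();
                _client.GetStream().Write(message, 0, message.Length);
            }
        }

        private void EnsureConnected()
        {
            if (_client == null || !_client.Connected)
                _client = new TcpClient(_host, _port); // connects on construction
        }
    }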
The initiation sequence of a TCP connection is a very simple 3-way handshake with very low overhead, so there is no need to maintain a constant connection.
Also, having many TCP connections won't kill your server that quickly. Modern hardware and operating systems can handle hundreds of concurrent TCP connections, unless you are worried about denial-of-service attacks, which are obviously out of the scope of this question.
If your server has only a single client, I can't imagine in practice there'd be any issues with opening a new TCP socket per message. Sounds like your co-worker likes to prematurely optimize.
However, if you're flooding the server with messages, it may become an issue. But still, with a single client, I wouldn't worry about it.
Just make sure you close the socket when you're done with it. No need to be rude to the server :)
In addition to what everyone said, consider UDP. It's perfect for small messages where no response is expected, and on a local network (as opposed to Internet) it's practically reliable.
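For completeness, a minimal sketch of the UDP alternative (endpoint and payload are illustrative; keep in mind UDP gives no delivery guarantee, even on a LAN):

    using System.Net.Sockets;
    using System.Text;

    class UdpSender
    {
        static void Main()
        {
            using (var udp = new UdpClient())
            {
                // One datagram per logical message; no connection setup or teardown.
                byte[] payload = Encoding.UTF8.GetBytes("<message>hello</message>");
                udp.Send(payload, payload.Length, "127.0.0.1", 9000);
            }
        }
    }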
From the server's perspective, it is not a problem to have a very large number of connections open.
How many socket connections can a web server handle?
From the client's perspective, if measuring shows you need to avoid the time spent initiating connections and you want parallelism, you could create a connection pool. Multiple threads can re-use each of the connections and release them back into the pool when they're done. That does raise the complexity level, so once again, make sure you need it. You could also have logic to shrink and grow the pool based on activity; it would be a shame to hold connections open to the server overnight while the app is just sitting there idle.
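A very small sketch of that pooling idea (fixed pool size, illustrative endpoint, no grow/shrink logic):

    using System;
    using System.Collections.Concurrent;
    using System.Net.Sockets;

    class ConnectionPool
    {
        private readonly ConcurrentBag<TcpClient> _pool = new ConcurrentBag<TcpClient>();

        public ConnectionPool(string host, int port, int size)
        {
            // Open a fixed number of connections up front.
            for (int i = 0; i < size; i++)
                _pool.Add(new TcpClient(host, port));
        }

        // Threads rent a connection, use it, and must return it when done.
        public TcpClient Rent() =>
            _pool.TryTake(out var client)
                ? client
                : throw new InvalidOperationException("Pool exhausted");

        public void Return(TcpClient client) => _pool.Add(client);
    }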

How can I optimize SSL session so I can reuse it later (if needed) to improve Client Server performance

I have a server running on Windows Azure here with a large key (the link is intended to demonstrate a large key in an SSL cert). Based on this Security.SE conversation, the larger key will be more expensive to set up and tear down from a CPU perspective.
Assuming I'm using a .NET client and a .NET server, what changes should I make (if any) to reduce the overhead of connecting and disconnecting, from an SSL perspective?
For the purpose of this conversation let's include these scenarios (add more if you can think of them)
WebBrowser to IIS
WCF client to WCF Server (IIS)
WCF client to WCF TCP
Sockets-based client to Sockets-based server
The cost of an initial handshake is basically fixed (given certain parameters). The cost of a resumed handshake is approximately zero.
The way to improve performance is to increase the amount of sessions that are resumed sessions, and not initial sessions. This amortizes the cost of the initial handshake across the resumed handshakes, reducing the average handshake cost.
The easiest way to increase the resumed handshake rate is to have a larger session cache size/timeout. Of course, having a large session cache can create its own performance issues. One needs to find a good balance between these two, and the best way to do that is with testing.
If the application is made to keep the WCF connections open, it may make sense to enable KeepAlive (it's disabled by default).
The TCP connection will be reused automatically when the keep-alive switch is turned on. For the ServicePointManager, you can use the SetTcpKeepAlive method to turn on the keep-alive option for a TCP connection. Refer to the following MSDN article:
ServicePointManager.SetTcpKeepAlive Method
http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.settcpkeepalive.aspx
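A minimal sketch of that call (the timings are illustrative):

    using System.Net;

    class TcpKeepAliveConfig
    {
        static void Main()
        {
            // Enable TCP keep-alive for connections created via ServicePointManager:
            // send the first probe after 30 seconds of inactivity, then every 5 seconds.
            ServicePointManager.SetTcpKeepAlive(true, 30000, 5000);
        }
    }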
From Microsoft:
Generally, from a performance perspective, the difference between plain HTTP and HTTPS lies in the handshake of the TCP connection: an HTTPS handshake takes longer than an HTTP one. However, after the connection is established, the difference is very small, since a block cipher is used on the connection, and the difference between a ‘very high bit’ cert and a common cert is smaller still. We have dealt with a lot of slow-performance cases, but we seldom see cases where the slowness is caused by a stronger cert; network congestion, high CPU utilization, a large amount of ViewState data, etc. are the main causes of slow performance.
From the IIS perspective, note that in IIS Manager there is an option checked by default for a website, ‘Enable HTTP Keep-Alives’. This option ensures that IIS and the client browser keep the TCP connection alive for a time across HTTP requests. That is to say, for round-trips between an IIS server and the client, only the first request will be noticeably slower than the others.
You can refer to following article about this setting:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d7e13ea5-4350-497e-ba34-b25c0e9efd68.mspx?mfr=true
Of course, I know for WCF, IIS is not a must to host applications for many scenarios, but on this point, I think they work similarly.

web service slowdown

I have a web service slowdown.
My (web) service is written in gSOAP and managed C++. It is not hosted in IIS/Apache, but it speaks XML.
My client is in .NET
The service computation time is light (<0.1s to prepare reply). I expect the service to be smooth, fast and have good availability.
I have about 100 clients, response time is 1s mandatory.
Clients have about 1 request per minute.
Clients check for the web service's presence with a TCP open-port test.
So, to avoid possible congestion, I turned gSoap KeepAlive to false.
Until then everything ran fine: I barely saw any connections in TCPView (Sysinternals).
A new synchronization program now calls the service in a loop.
The load is higher, but everything is processed in under 30 seconds.
With Sysinternals TCPView, I now see that about a thousand connections are in TIME_WAIT.
They slow down the service, and it now takes seconds for the service to reply.
Could it be that I need to reset the SoapHttpClientProtocol connection?
Has anyone else seen TIME_WAIT ghosts when calling a web service in a loop?
Sounds like you aren't closing the connection after the call and are opening a new connection for each request. Either close the connection or reuse the open connections.
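One way to make an old-style .NET web-service proxy (SoapHttpClientProtocol, as in the question) reuse its TCP connection is to force HTTP keep-alive on the underlying request. This is only a sketch, and it helps only if the gSOAP server also allows keep-alive:

    using System;
    using System.Net;
    using System.Web.Services.Protocols;

    // Illustrative proxy class; a wsdl.exe-generated proxy would supply the
    // actual SOAP methods, and this override would live in its partial class.
    public class KeepAliveProxy : SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            var request = (HttpWebRequest)base.GetWebRequest(uri);
            request.KeepAlive = true; // reuse the underlying TCP connection across calls
            return request;
        }
    }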
Be very careful with the implementations mentioned above. There are serious problems with them.
The implementation described in yakkowarner.blogspot.com/2008/11/calling-web-service-in-loop.html (COMMENT ABOVE):
PROBLEM: All your work will be wiped out the next time you regenerate the web service proxy using wsdl.exe, and you will have forgotten what you did; not to mention that this fix is rather hacky, relying on a message string to decide when to act.
The implementation described in forums.asp.net/t/1003135.aspx (COMMENT ABOVE):
PROBLEM: You are selecting a local port between 5000 and 65535, so on the surface this looks like a good idea. But if you think about it, there is no way (at least none I can think of) to reserve ports for later use. How can you guarantee that the next port on your list is not currently in use? You are sequentially picking ports, and if some other application grabs a port that is next on your list, you are hosed. Or what if some other application running on your client machine starts using random ports for its connections? You would be hosed at UNPREDICTABLE points in time. You would RANDOMLY get an error message like "remote host can't be reached or is unavailable", which is even harder to troubleshoot.
Although I can't give you the right solution to this problem, some things you can do are:
Try to minimize the number of web service requests or spread them out more over a longer period of time
For your type of app maybe web services wasn't the correct architecture - for something with 1ms response time you should be using a messaging system - not a web service
Raise your OS's ephemeral port limit to around 65K via the registry (on Windows, the MaxUserPort setting)
Lower the time that sockets remain in TIME_WAIT (this presents its own list of problems)
