C# SSL Stops Receiving (But Connection Doesn't Break)

I have a game server that reads and writes through an SslStream module using async calls. The server holds up great and functions perfectly fine; however, after a period of increased traffic, a user's SSL stream simply stops receiving on the server end.
The client, however, still receives and sends data fine; from the player's perspective the game keeps working until they can no longer deal damage.
No other users are directly affected at the time, and all other streams continue along (unless they also hit the same bug). Resetting the connection restores it to a functioning state.
As far as I can tell from debugging:
The socket remains active (until the server initiates an inactivity drop)
Only server-side receiving is ever affected
Increasing the amount of data sent increases the likelihood of the issue occurring
No exceptions are thrown, and the stream still reports that it can read/write
The stream can still successfully send to the client
Terminating the client's connection still delivers the normal zero-length termination 'read' on the client
Does anyone know what could cause this? (Also possibly worth noting: the system can also be booted with a plain TCP socket, and it runs flawlessly that way. The issue only occurs when using the SSL stream.)
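For context, here is a minimal sketch of the kind of async SslStream read loop being described (an assumption about the setup, not the poster's actual code; HandlePacket is a hypothetical stand-in for game logic). One detail worth checking in a situation like this: SslStream, unlike a raw Socket, does not support overlapping read operations, so the next BeginRead must only be issued after EndRead has returned.

    using System;
    using System.Net.Security;

    static class SslReceiveLoop
    {
        // Assumed shape of the async read loop (not the poster's actual code).
        public static void BeginReceive(SslStream stream, byte[] buffer)
        {
            stream.BeginRead(buffer, 0, buffer.Length, ar =>
            {
                int bytesRead = stream.EndRead(ar);
                if (bytesRead == 0)
                    return; // peer closed the connection gracefully

                HandlePacket(buffer, bytesRead);
                BeginReceive(stream, buffer); // next read only after EndRead completes
            }, null);
        }

        static void HandlePacket(byte[] buffer, int count)
        {
            // placeholder for the game's packet handling
        }
    }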

Related

Winsock IOCP Weird Behaviour On Disconnect Flood

I'm programming a Socket/Client/Server library for C#. I do a lot of cross-platform programming, and I didn't find Mono/.NET/.NET Core efficient enough at high-performance socket handling.
Since Linux epoll unarguably won the performance and usability "fight", I decided to use an epoll-like interface as the common API, so I'm trying to emulate it on Windows (Windows socket performance matters less to me than Linux's, but the interface does). To achieve this I call the Winsock2 and Kernel32 APIs directly via marshaling, and I use IOCP.
Almost everything works fine, except one thing. When I create a TCP server with Winsock and connect to it with more than 10,000 connections (locally or from a remote machine over the LAN, it doesn't matter), all connections are accepted with no problem at all. When all the clients flood data at the server, again no problem: the server receives all packets. But when I disconnect all clients at the same time, the server does not recognize all the disconnection events (i.e. the 0-byte read/available); usually 1,500-8,000 clients get stuck. The completion event never gets triggered, so I cannot detect the connection loss.
The server does not crash; it continues to accept new connections and everything works as expected. Only the lost connections go unrecognized.
I've read that, because overlapped I/O needs pre-allocated read buffers, IOCP locks those buffers for a pending read and releases the locks on completion; if too many events happen at the same time, it cannot lock all the affected buffers because of an OS limit, and this causes IOCP to hang indefinitely.
I've also read that the solution to this buffer-locking problem is to post the read with a zero-sized buffer (a null pointer to the buffer itself), so the pending read locks nothing, and to use a real buffer only when there is real data to read.
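For illustration, a rough sketch of that zero-byte-receive idea using the managed SocketAsyncEventArgs API rather than raw Winsock (an approximation of the technique, not the poster's code; names here are hypothetical): the pending receive carries an empty buffer, so nothing is pinned while it waits, and a real buffer is used only once data is known to be available.

    using System;
    using System.Net.Sockets;

    static class ZeroByteReceive
    {
        public static void Start(Socket socket)
        {
            var e = new SocketAsyncEventArgs();
            e.UserToken = socket;
            e.Completed += OnCompleted;
            Post(socket, e);
        }

        static void Post(Socket socket, SocketAsyncEventArgs e)
        {
            e.SetBuffer(Array.Empty<byte>(), 0, 0); // zero-byte read: locks no buffer
            if (!socket.ReceiveAsync(e))
                OnCompleted(socket, e); // completed synchronously
        }

        static void OnCompleted(object sender, SocketAsyncEventArgs e)
        {
            var socket = (Socket)e.UserToken;
            // Completion means data (or a close) is pending; only now is a
            // real buffer attached for the actual read.
            var buffer = new byte[4096];
            int read = socket.Receive(buffer);
            if (read == 0) { socket.Close(); return; } // graceful disconnect
            // ... process `read` bytes ...
            Post(socket, e); // re-arm the zero-byte receive
        }
    }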
I've implemented the above workaround and it works, except for the original problem: after disconnecting many thousands of clients at the same time, a few thousand still get stuck.
Of course I allow for the possibility that my code is wrong, so I made a basic server with .NET's built-in SocketAsyncEventArgs class (as the official example describes), which does essentially the same thing over IOCP, and the results are the same.
Everything works fine except for the mass disconnect: when thousands of clients disconnect at the same time, a few thousand of the disconnection (read-on-disconnect) events never get recognized.
I know I could perform an I/O operation and check the return value to see whether the socket can still do I/O, and disconnect it if not. The problem is that in some cases I have nothing to send to the socket; I only receive data. And doing it periodically would amount to polling, causing high load and wasted CPU work with thousands of connections.
(I close the clients using numerous methods, from graceful disconnection to hard TCP socket closing, on both Windows and Linux clients; the results are always the same.)
My questions:
Is there any known solution to this problem?
Is there an efficient way to recognize a graceful TCP connection close by the remote side?
Can I somehow set a read timeout on an overlapped socket read?
Any help would be appreciated, thank you!

c# named pipes detect server shutdown

In C#, named pipes have the horrible characteristic of getting into full-CPU-load loops while connecting (waiting for the server) and, in my case, also when the server closes and leaves the client alone. Hence my question: can the client detect a server shutdown (server disconnect) through events?
Currently I run a timer that tests the connection every two seconds. This works, since I can close the connection in the client and dispose the NamedPipeClientStream, which stops the high CPU load. In the worst case, however, it means up to two seconds of very high CPU load.
To avoid the high CPU load altogether, I'm looking for some kind of event mechanism in the client that detects a one-sided connection shutdown by the server. Is there any? I have not found anything so far.
It would also be OK to force the client stream not to seek reconnection when the server is closing (at least that looks like what is happening, since it closely resembles the CPU behaviour while waiting for the server).
The connection is bidirectional and async.
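A sketch of one possible event-style approach (an assumption, not a confirmed fix): keep an asynchronous read pending on the pipe, and treat its completion, whether a zero-byte read or an IOException, as the disconnect event instead of polling on a timer.

    using System;
    using System.IO;
    using System.IO.Pipes;
    using System.Threading.Tasks;

    static class PipeDisconnectWatch
    {
        public static async Task ReadLoopAsync(NamedPipeClientStream pipe, Action onServerGone)
        {
            var buffer = new byte[4096];
            try
            {
                while (true)
                {
                    int read = await pipe.ReadAsync(buffer, 0, buffer.Length);
                    if (read == 0)
                        break; // server closed its end of the pipe
                    // ... handle `read` bytes of normal traffic ...
                }
            }
            catch (IOException)
            {
                // pipe broken: server went down without a graceful close
            }
            onServerGone();
        }
    }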
One more question about named pipes: when the server is up and ready, is it OK to try to connect as a client with a specified timeout of 0? Does that guarantee at least one attempt, returning immediately if the server is not up?
Or does the timeout have to be at least as high as the network delays or server response times?
Thx for help
Stefan

Keeping track of a file transfer percentage

I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to the client, I send the expected file size, so the client knows how many bytes to expect; I imagine I can then easily do curCount / totalCount on the client. But I'm a bit confused about how to do this for the server.
How accurately can the server tell the transfer situation for the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)? Or should I have the client relay its completion back to the server?
I'd like to know this both for deciding when to close the connection and for displaying progress visually. Should the server trust the client to close the connection (barring network errors/timeouts/etc.)? Or can the server close the connection as soon as it has written everything to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte, the server's percentage will lag the client's. No matter what you do, you will have differing values on the two ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small; AFAIK the default maximum TCP window size corresponds to about 200 ms of data transfer.
You probably don't need to worry about this issue at all, because the progress values of the two parties will be tightly tied to each other.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)?
This is an adequate solution.
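A minimal sketch of that approach (hypothetical helper names; note that "progress" here means bytes handed to the stream, not bytes the client has actually received):

    using System;
    using System.IO;
    using System.Net.Sockets;

    static class SendWithProgress
    {
        public static void SendFile(NetworkStream stream, string path, Action<double> onProgress)
        {
            using (var file = File.OpenRead(path))
            {
                long total = file.Length, sent = 0;
                var buffer = new byte[4096];
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    stream.Write(buffer, 0, read);   // hand the chunk to the stream
                    sent += read;
                    onProgress(100.0 * sent / total); // report local write progress
                }
            }
        }
    }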
Or should I have the client relay back to the server the client's completion?
That would be the second case I described in my first paragraph. Also acceptable, though not necessarily a better result, and more overhead. I can't think of a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party to a TCP exchange has finished sending, it should shut the socket down for sending (using Socket.Shutdown(SocketShutdown.Send)). This causes the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the graceful shutdown sequence completes without error on both sides, you know that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
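In code, the shutdown-then-close sequence on the sending side looks roughly like this (a sketch, not the asker's code):

    using System.Net.Sockets;

    static class GracefulClose
    {
        public static void FinishSend(Socket socket)
        {
            socket.Shutdown(SocketShutdown.Send); // tells the peer: no more data from me
            // Drain until the peer shuts down too; a zero-byte read here
            // confirms the other side is finished as well.
            var tmp = new byte[256];
            while (socket.Receive(tmp) > 0) { }
            socket.Close();
        }
    }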

What happens to a socket on suspend/resume in windows

I have a C# .NET 4 application that listens on a socket using BeginReceiveFrom and EndReceiveFrom. All works as expected until I put the machine to sleep and then resume.
At that point EndReceiveFrom executes and throws an exception (cannot access a disposed object). It appears that the socket is disposed when the machine is suspended, but I'm not sure how to handle this.
Should I presume that all sockets have been disposed and recreate them all from scratch? I'm having trouble tracking down the exact issue, as remote debugging also breaks on suspend/resume.
What happens during suspend/resume very much depends on your hardware and networking setup. If your network card is not disabled during suspend, and the suspend is brief, open connections will survive suspend/resume without any problem (open TCP connections can time out on the other end of course).
However, if your network adapter is disabled during the sleep, or it is a USB adapter that gets disabled because it is connected to a disabled hub, or your computer gets a new IP address from DHCP, or your wireless adapter gets reconnected to a different access point, etc., then all current connections are going to be dropped, listening sockets will no longer be valid, etc.
This is not specific to sleep/resume. Network interfaces can come up and go down at any time, and your code must handle it. You can easily simulate this with a USB network adapter: yank it out of your computer, and your code must handle that too.
I've had similar issues with suspend/resume and sockets (under .NET 4 and Windows 8, but I suspect not limited to these).
Specifically, I had a client socket application which only received data. Reading was done via BeginReceive with a callback, and code in the callback handled the typical failure cases (e.g. the remote server closing the connection, gracefully or not).
When the client machine went to sleep (and this probably applies to the newer Windows 8 Fast Start mode too, which is really just a kind of sleep/hibernate), the server would close the connection after a few seconds. When the client woke up, however, the async read callback was not called, even though I would expect it to fire when the socket hits an error condition or is closed, not only when there is data. I explicitly added code on a timer to periodically check for this condition and recover, but even here, using a combination of Poll, Available and Connected to check if the connection was up, the socket on the client side STILL appeared to be connected, so the recovery code never ran. I think if I had tried sending data I would have received an error, but as I said, this connection was strictly one-way.
The solution I ended up using was to detect the resume-from-sleep condition and close and re-establish my socket connections when it occurred. There are quite a few ways of detecting resume; in my case I was writing a Windows Service, so I could simply override the ServiceBase.OnPowerEvent method.
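A minimal sketch of that detection in a Windows Service (ResetSocketConnections is a hypothetical stand-in for the actual reconnect logic):

    using System.ServiceProcess;

    class MyService : ServiceBase // hypothetical service hosting the socket code
    {
        protected override bool OnPowerEvent(PowerBroadcastStatus powerStatus)
        {
            if (powerStatus == PowerBroadcastStatus.ResumeSuspend ||
                powerStatus == PowerBroadcastStatus.ResumeAutomatic)
            {
                ResetSocketConnections(); // hypothetical: tear down and re-open sockets
            }
            return base.OnPowerEvent(powerStatus);
        }

        void ResetSocketConnections()
        {
            // close the old sockets and re-establish the connections here
        }
    }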

.net Tcp Server receives bytes in large clumps every few minutes

Background:
C# .NET synchronous TCP server
a TcpClient object is obtained by blocking on a TcpListener with the AcceptTcpClient method
once there's a TcpClient object, I pass it to a thread that invokes the client's GetStream method to create a NetworkStream
this NetworkStream is looped over, each iteration doing a networkStream.Read(someBuffer, 0, 4096) (see the sketch after this list)
right now client and server are located on the same network, with no congestion to speak of
my server has plenty of memory to spare
if I load my server software onto another machine, the problem goes away
the kicker: traffic from a Linux box on the same network gets through fine and on time
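A minimal sketch of the server structure described in the list above (hypothetical names; one thread per accepted client):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    static class SyncTcpServer
    {
        public static void Run(int port)
        {
            var listener = new TcpListener(IPAddress.Any, port);
            listener.Start();
            while (true)
            {
                TcpClient client = listener.AcceptTcpClient(); // blocks here
                new Thread(() => Handle(client)).Start();
            }
        }

        static void Handle(TcpClient client)
        {
            using (client)
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[4096];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // ... process `read` bytes of request data ...
                }
            }
        }
    }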
My server had been functioning just fine for several months. Over the past weekend, however, instead of receiving small groups of bytes in quick succession, the point where the process begins (tcpListener.AcceptTcpClient()) now fires only every couple of minutes. So my server sits idle, then gets 30-50 client requests all bundled into one huge block of bytes. Needless to say this causes a huge delay and puts strain on my server. If the clump of client requests is big enough, it can take my server 30 minutes to catch up.
In logging built into my clients, I can see them do network writes, and flush between each one. So the clients are functioning correctly.
This reeks of some kind of system intervention. Is my TCP server (as described above) bad, or is something in Windows interfering with my traffic, and how can I tell?
Thanks guys.
You might want to install some packet-capture software at each TCP endpoint; you'd be surprised. I'm suffering from a similar problem now, almost completely identical actually.
When I put capture software in place, I noted that traffic between the endpoints was fast and on time, as expected.
