What happens to a socket on suspend/resume in Windows - C#

I have a C# .NET 4 application that listens on a socket using BeginReceiveFrom and EndReceiveFrom. All works as expected until I put the machine to sleep and then resume.
At that point EndReceiveFrom executes and throws an exception (Cannot access a disposed object). It appears that the socket is disposed when the machine is suspended, but I'm not sure how to handle this.
Should I presume that all sockets have been disposed and recreate them all from scratch? I'm having problems tracking down the exact issue, as remote debugging also breaks on suspend/resume.

What happens during suspend/resume very much depends on your hardware and networking setup. If your network card is not disabled during suspend, and the suspend is brief, open connections will survive suspend/resume without any problem (open TCP connections can time out on the other end of course).
However, if your network adapter is disabled during the sleep, or it is a USB adapter that gets disabled because it is connected to a disabled hub, or your computer gets a new IP address from DHCP, or your wireless adapter reconnects to a different access point, etc., then all current connections will be dropped, listening sockets will no longer be valid, and so on.
This is not specific to sleep/resume. Network interfaces can come up and go down at any time, and your code must handle it. You can easily simulate this with a USB network adapter: yank it out of the computer and check that your code copes.
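In .NET you can watch for these transitions with the NetworkChange events and rebuild your sockets when they fire; a minimal sketch:

using System;
using System.Net.NetworkInformation;

static class NetWatcher
{
    // Subscribe once at startup and recreate sockets on connectivity changes.
    public static void Start()
    {
        NetworkChange.NetworkAvailabilityChanged += (s, e) =>
        {
            Console.WriteLine("Network is now {0}", e.IsAvailable ? "up" : "down");
            // Rebuild listening sockets here once e.IsAvailable is true again.
        };
        NetworkChange.NetworkAddressChanged += (s, e) =>
        {
            // Fires on address changes (e.g. a new DHCP lease); sockets bound
            // to the old address are no longer valid after this.
        };
    }
}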

I've had similar issues with suspend/resume and sockets (under .NET 4 and Windows 8, but I suspect not limited to these).
Specifically, I had a client socket application which only received data. Reading was done via BeginReceive with a call-back. Code in the call-back handled the typical failure cases (e.g. the remote server closing the connection, gracefully or otherwise).
When the client machine went to sleep (and this probably applies to the newer Windows 8 Fast Start mode too, which is really just a kind of sleep/hibernate), the server would close the connection after a few seconds. When the client woke up, however, the async read call-back was not called, even though I would expect it to fire when the socket hits an error condition or is closed, not just when data arrives. I explicitly added timer-driven code to the client to periodically check for this condition and recover. Even here, using a combination of Poll, Available and Connected to test whether the connection was up, the socket on the client side STILL appeared to be connected, so the recovery code never ran. I suspect that sending data would have surfaced an error, but as I said, this connection was strictly one-way.
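For illustration, the kind of check I mean looks roughly like this (a sketch; as described above, even this kept reporting the half-open socket as connected in my case):

using System.Net.Sockets;

// Returns false when the connection is known to be down. A readable
// socket with zero bytes available normally means the remote end closed.
static bool LooksConnected(Socket s)
{
    try
    {
        bool readable = s.Poll(0, SelectMode.SelectRead);
        return s.Connected && !(readable && s.Available == 0);
    }
    catch (SocketException) { return false; }
    catch (ObjectDisposedException) { return false; }
}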
The solution I ended up using was to detect the resume from sleep condition and close and re-establish my socket connections when this occurred. There are quite a few ways of detecting resume; in my case I was writing a Windows Service, so I could simply override the ServiceBase.OnPowerEvent method.
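A minimal sketch of that override (ReopenSockets here is a placeholder for whatever teardown/reconnect logic you need):

using System.ServiceProcess;

// inside your ServiceBase-derived class:
protected override bool OnPowerEvent(PowerBroadcastStatus powerStatus)
{
    if (powerStatus == PowerBroadcastStatus.ResumeSuspend ||
        powerStatus == PowerBroadcastStatus.ResumeAutomatic)
    {
        // The machine just woke up: close the old sockets and reconnect.
        ReopenSockets();
    }
    return base.OnPowerEvent(powerStatus);
}

In a desktop application, Microsoft.Win32.SystemEvents.PowerModeChanged (checking for PowerModes.Resume) gives you the same notification.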

Winsock IOCP Weird Behaviour On Disconnect Flood

I'm programming a Socket/Client/Server library for C#, since I do a lot of cross-platform programming and I didn't find mono/dotnet/dotnet core efficient enough for high-performance socket handling.
Since Linux's epoll unarguably won the performance and usability "fight", I decided to use an epoll-like interface as the common API, so I'm trying to emulate it in a Windows environment (Windows socket performance matters less to me than Linux's, but the interface does). To achieve this I call the Winsock2 and Kernel32 APIs directly via marshaling, and I use IOCP.
Almost everything works fine, except one thing: when I create a TCP server with Winsock and connect to it (locally or from a remote machine over the LAN, it does not matter) with more than 10,000 connections, all connections are accepted without a problem. When all those connections flood data from the client side to the server, the server receives every packet. But when I disconnect all clients at the same time, the server does not recognize all of the disconnection events (i.e. a 0-byte read/available); usually 1500-8000 clients get stuck. The completion event never fires, so I cannot detect the connection loss.
The server does not crash; it continues to accept new connections, and everything works as expected. Only the lost connections go unrecognized.
I've read that, because overlapped IO needs pre-allocated read buffers, IOCP locks these buffers while a read is pending and releases them on completion; if too many events occur at the same time, it cannot lock all of the affected buffers because of an OS limit, and this causes IOCP to hang indefinitely.
I've read that the solution to this buffer-locking problem is to post a zero-sized read with a null pointer for the buffer, so the pending read locks nothing, and to use a real buffer only when there is real data to read.
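For illustration, the same idea expressed with .NET's SocketAsyncEventArgs looks roughly like this (a sketch, not my production code; the raw Winsock version posts a null-buffer WSARecv the same way):

using System;
using System.Net.Sockets;

static class ZeroByteRead
{
    // Post a zero-byte receive so no buffer is locked while the socket idles.
    public static void Post(Socket socket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[0], 0, 0);        // empty buffer: nothing to lock
        args.Completed += (s, e) => OnReadable(socket);
        if (!socket.ReceiveAsync(args))
            OnReadable(socket);                   // completed synchronously
    }

    static void OnReadable(Socket socket)
    {
        var buffer = new byte[4096];
        int read;
        try { read = socket.Receive(buffer); }    // real read, real buffer
        catch (SocketException) { socket.Close(); return; }
        if (read == 0)
        {
            socket.Close();                       // remote side closed
            return;
        }
        // ... process buffer[0..read), then wait for the next event:
        Post(socket);
    }
}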
I've implemented the above workaround and it works, except for the original problem: after disconnecting many thousands of clients at the same time, a few thousand still get stuck.
Of course I allow for the possibility that my code is wrong, so I built a basic server with .NET's built-in SocketAsyncEventArgs class (following the official example), which does essentially the same thing over IOCP, and the results are identical.
Everything works fine until thousands of clients disconnect at the same time, at which point a few thousand of the disconnection (read-on-disconnect) events go unrecognized.
I know I could issue an IO operation and check the return value to see whether the socket can still perform IO, and disconnect it if not. The problem is that in some cases I have nothing to send to the socket; I only receive data. And doing such an operation periodically would be almost the same as polling, causing high load and wasted CPU work with thousands of connections.
(I have closed the clients using numerous methods, from graceful disconnection to hard TCP socket closing, on both Windows and Linux clients; the results are always the same.)
My questions:
Is there any known solution to this problem?
Is there an efficient way to recognize a graceful TCP connection close by the remote side?
Can I somehow set a read timeout on an overlapped socket read?
Any help would be appreciated, thank you!

C# named pipes: detect server shutdown

In C#, named pipes have the horrible characteristic of getting into full-CPU-load loops while connecting (waiting for the server) and, in my case, also when the server closes and leaves the client alone. Hence my question: can the client detect a server shutdown (server disconnect) through events?
Currently I run a timer that tests the connection every two seconds (a sketch of this watchdog follows below). This works, in that I can close the connection in the client and dispose the NamedPipeClientStream, which stops the high CPU load. However, it means that in the worst case there is very high CPU load for up to two seconds.
To avoid any high CPU load at all, I'm looking for some kind of event mechanism that lets the client detect a one-sided shutdown by the server. Is there any? I have not found anything so far.
It would also be OK to force the client stream not to seek reconnection when the server is closing (at least that appears to be what is happening, since it closely resembles the CPU behaviour while waiting for the server).
The connection is bidirectional and async.
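Here is roughly what that two-second watchdog looks like today (a sketch; onServerGone stands in for my close-and-dispose cleanup):

using System;
using System.IO.Pipes;
using System.Threading;

class PipeWatchdog
{
    readonly Timer timer;

    public PipeWatchdog(NamedPipeClientStream pipe, Action onServerGone)
    {
        // Check every 2 s; once the server side is gone, disposing the
        // stream stops the high-CPU loop. (IsConnected may only reflect
        // the break after an IO attempt has failed.)
        timer = new Timer(_ =>
        {
            if (!pipe.IsConnected)
            {
                pipe.Dispose();
                onServerGone();
            }
        }, null, 2000, 2000);
    }

    public void Stop() { timer.Dispose(); }
}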
One more question about named pipes: when the server is up and ready, would it be OK to try to connect as a client with a specified timeout of 0? Does that guarantee at least one attempt, returning immediately if the server is not up?
Or does the timeout have to be at least as high as the network delays or server response times?
Thanks for any help,
Stefan

Simulate Socket Hard Disconnect

I have a C# app where a server and some clients communicate from different machines using sockets.
Most of the time, the server detects a disconnect correctly when it receives 0 bytes from the sock.Receive(...) call. But when there is a hardware issue (say, a network cable is unplugged), there is a problem: one server thread continues to block in sock.Receive(...) because it doesn't know the connection is lost. I was going to add a heartbeat message to detect this, but I wanted to test it in dev first.
But I'm not sure how I can test this case without an actual hardware issue. Even when I just kill the client process, the socket somehow manages to disconnect gracefully (that is, the server does a read of 0 bytes). It's only when I physically unplug the client machine from the network that I see this issue.
Is there any way that I can simulate this issue in dev?
You need to explicitly inform WinSock that you don't want it to clean up for you after closing the socket. This is done by setting LingerState as such:
socket.LingerState = new LingerOption(true, 0);
socket.Close();
LingerState is a bit confusing, because if you disable the linger, WinSock will actually linger but just not block your program. You have to enable the linger and set the linger timeout to zero in order to force WinSock to drop the connection.
P.S. If you want some info about keepalive packets (heartbeats), I've written a blog entry on the subject.
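(A minimal sketch of enabling keepalives on a .NET socket; the 30 s / 1 s values below are just examples:)

// Turn on TCP keepalive probes with OS-default timing:
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// Or tune the timing via SIO_KEEPALIVE_VALS (Windows-specific):
// 12 bytes = on/off flag, idle time before first probe (ms), probe interval (ms).
byte[] vals = new byte[12];
BitConverter.GetBytes(1u).CopyTo(vals, 0);      // enabled
BitConverter.GetBytes(30000u).CopyTo(vals, 4);  // 30 s idle
BitConverter.GetBytes(1000u).CopyTo(vals, 8);   // 1 s between probes
socket.IOControl(IOControlCode.KeepAliveValues, vals, null);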
Update:
I re-read your question (and comment), and my answer is wrong...
The code above will simulate an abortive close, not a half-open situation. There isn't a way to simulate a half-open situation in software; you need to unplug an Ethernet cable that is not attached to either computer in order to test this (e.g., yank the cable between the two switches in this configuration: computer A <-> switch <-> switch <-> computer B).

IBM JMS connection

I'm currently working in C# and I need to check the state of the JMS connection that I made (whether it's connected or disconnected). I'm sure that I can connect and disconnect successfully; it's just that I need to display the status of the connection in my UI.
Is there any property of the JMS connection that reports the connection status? Or is there any other method that I can use to check it?
Thanks for your help. :)
Currently, I'm using the ExceptionListener to listen for exceptions, and a flag is set to false when any exception is caught. When I connect, I set the flag to true; conversely, when I disconnect, I set it to false.
This flag will be used by my UI to detect whether the connection is up or not.
However, I was thinking that a property or method of the IBM connection that reports the state of the connection would be a much better solution. SonicMQ, for example, has .getConnectedState(), which shows whether the connection is active or inactive. Does IBM have something similar?
You can use the Connection.setExceptionListener() method to be notified asynchronously of exceptions detected in the connection. If a problem is detected the onException() method is called.
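From C# you would normally be on IBM's XMS .NET client rather than the Java JMS API; the flag approach you describe looks roughly like this there (a sketch assuming the IBM.XMS ExceptionListener delegate; verify the exact names against your client version):

using System;
using IBM.XMS;

class ConnectionMonitor
{
    // Read by the UI; flipped to false whenever XMS reports a failure.
    public volatile bool Connected;

    public void Attach(IConnection connection)
    {
        connection.ExceptionListener = new ExceptionListener(OnException);
        connection.Start();
        Connected = true;
    }

    void OnException(Exception ex)
    {
        Connected = false;   // connection broken; the UI flag goes red
    }
}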
Be sure to set the FAILIFQUIESCE property on the factories and destinations so that your connection is notified and closed in an orderly fashion when the QMgr is shut down by the administrator.
In v7 of WMQ it is possible to enable session reconnection in the transport. In this case, the application may not be aware that the connection was interrupted, and you can treat it as having been continuously connected.
Note that, for the most part, exceptions are driven by the application's API calls. So if the connection is lost, you may not find out in real time but rather when the next API call is made; this matters if the application sits idle for long periods and you want a real-time display of connection status. Please see also "How to find out if JMS Connection is there?" for more on that topic.
WMQ v7 has options to reconnect the client automatically. You must be using v7 at both the client and the server for this to work. Since v6 is end-of-life as of Sept 2011, it is best if you develop this app on v7. You can download the v7 client as SupportPac MQC7. When JMS client reconnect is enabled, the application may not be aware of the connection activity except as a delay in responding to an API call while the connection is rebuilt. The length of that delay depends on channel tuning set by the administrator and in the connection factory.

Socket Connection Error

I am facing a weird socket connection problem in a .NET Windows application. I am using a .NET socket to communicate asynchronously with a legacy InterSystems Caché database. The application has a specific timeout value; when the timeout occurs, the user is prompted to stay connected to the application. When I say "stay connected", the socket is not reset. I set the timeout to 30 minutes and choose to stay connected after the first idle period; then, when I navigate the application, it works fine.
If, without navigating in the application, I choose to stay connected a second time and then navigate in the app, I get a socket "host refused" connection error. I assume this means the socket was terminated. The weird part is that if I set the application timeout to 10 minutes, I also get the socket error the second time. When I check the socket's Connected property, it is still true. No exception is thrown when I call the socket's Send method, yet the data is not sent. I have checked the rest of the .NET code and it is fine. This problem also occurs only rarely, about 1 time in 10. Any suggestions would be greatly appreciated.
This sounds like a typical issue resulting from firewalls or other TCP settings.
Firewalls might silently disconnect the connection if it is idle more than x seconds.
As TCP does not generate an event in such a case (much like simply removing the network cable), it is highly recommended to send a ping message every x seconds so that the firewall keeps the connection open and you can be sure you are connected. If a ping is missed, the server disconnects the client.
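A minimal sketch of such a ping from the C# side, assuming a one-byte heartbeat message the peer tolerates:

using System;
using System.Net.Sockets;
using System.Threading;

static class Heartbeat
{
    // Send a 1-byte ping every 30 s; a failed send reveals a dead
    // connection that raised no event. The 0x00 byte is a placeholder;
    // use whatever message your wire protocol allows.
    public static Timer Start(Socket socket, Action onDead)
    {
        byte[] ping = { 0x00 };
        return new Timer(_ =>
        {
            try { socket.Send(ping); }
            catch (SocketException) { onDead(); }
            catch (ObjectDisposedException) { onDead(); }
        }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }
}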
