I know TCP won't break an established connection during a certain period of inactivity. But is it possible to increase this time?
I am developing a client-server application in C# that streams continuous data from a server to the client application. I want the client to keep collecting data if the cable is reconnected within one minute.
There are three timeouts that you can use in the TcpClient: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient?view=netcore-3.1#properties
Connect timeout. The time it takes to establish the connection
Receive timeout. The timeout on the receiving side once the reading is initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.receivetimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_ReceiveTimeout
Send timeout. The timeout on the sender side once the sending is initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.sendtimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_SendTimeout
Those are the out-of-the-box settings you can use. If you didn't mean one of these, please clarify in your question.
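As a sketch, assuming .NET Core 3.x or later and a placeholder host/port: the send and receive timeouts can be set directly on the TcpClient, and a connect timeout can be approximated by waiting on ConnectAsync, since there is no Connect timeout property as such.

```csharp
using System;
using System.Net.Sockets;

class TimeoutExample
{
    static void Main()
    {
        // Hypothetical endpoint; replace with your server's address and port.
        using var client = new TcpClient();

        // There is no connect-timeout property; a common pattern is to
        // wait on ConnectAsync for a bounded time.
        var connectTask = client.ConnectAsync("example.com", 9000);
        if (!connectTask.Wait(TimeSpan.FromSeconds(5)))
            throw new TimeoutException("Connect did not complete within 5 seconds.");

        // Applies to blocking reads once reading has started.
        client.ReceiveTimeout = 60_000; // 1 minute, per the question
        // Applies to blocking writes once sending has started.
        client.SendTimeout = 10_000;

        using NetworkStream stream = client.GetStream();
        var buffer = new byte[4096];
        // This Read() throws an IOException if no data arrives
        // within ReceiveTimeout.
        int read = stream.Read(buffer, 0, buffer.Length);
    }
}
```

Note that these timeouts govern blocking I/O calls; they do not change how long the OS itself keeps an unacknowledged connection alive, so on their own they won't give you the "survive a one-minute cable pull" behavior the question asks about.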
Related
I've created a server-client communication program in .NET (C# or VB.NET) using TcpListener/Socket on port 8080. In simple terms, the program works like chat software: the client connects to the server, and both wait for messages from each other and then process them.
To retrieve packets from the client, I am using a loop like this:
While True
    Dim Buffer(4095) As Byte ' 4096-byte buffer
    Dim bytesRead As Integer = s.Receive(Buffer)
    If bytesRead = 0 Then Exit While ' remote side closed the connection
    Dim strDataReceived As String = System.Text.Encoding.ASCII.GetString(Buffer, 0, bytesRead)
    ProcessData(strDataReceived) 'Process data received...........
End While
When testing both server.exe and client.exe locally, the software works fine for several hours without any problem.
But when I run server.exe on my real server, the connection between server and client is usually lost a few dozen minutes after the client connects. The symptom is that the client sends a packet to the server, but the server does not receive it while still blocked in the s.Receive(Buffer) call. I have tested many times but have had no luck keeping the connection running for over an hour.
I have investigated this problem, but it is still very strange:
The server does not have any firewall software installed.
The client is not using any proxy, antivirus, or firewall software.
I use bandwidth-logging software on the server to make sure its internet connection is stable.
I ran 'ping -t' from my client computer to the server and watched it to confirm there was no connection loss between client and server. The ping times usually range from 5 ms to 50 ms, and no timeouts occur.
I even tried unplugging the network cable on the client computer for a few seconds and then replugging it to simulate a disconnect. I observed that the server-client connection is still maintained, so that is not the cause of my symptom.
I was thinking of writing code to auto-reconnect on a receive timeout. But that could make my software lag while reconnecting if the underlying symptom is still there. I really want to know what is wrong with my code and how to fix the symptom described above.
Likely the server is behind some sort of firewall (Cisco ASA, etc.) which has idle connection timeouts. When you "punch" through a firewall / NAT device, a "session" is created within the firewall kernel. There is an associated resource that has to be reclaimed, so firewalls do not usually allow unlimited connection timeout, but firewalls do support things like dead connection detection.
Adding a keepalive packet / activity every 5 minutes, or disconnecting and reconnecting, is the only way around that. Few network admins are going to change their configs to accommodate this. It is pretty simple to implement a "ping" or "keepalive" command in a custom TCP application protocol: just send the string and consume it on the other side. You don't even have to respond to the packet to reset the idle timer within the firewall, though responding would probably be best practice.
When I say keepalive, I don't mean the TCP keepalive socket option. That is a zero-length packet and is detectable by a good firewall, like Cisco's. Cisco admins can set up rules to quietly deny your keepalive packets, so the solution is to implement keepalive above the TCP layer, in the application layer, by sending a small string of data like "KEEPALIVE\r\n".
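A minimal sketch of such an application-level keepalive in C#. The "KEEPALIVE\r\n" string and 5-minute interval are just the examples from above; the NetworkStream and the write lock are assumed to come from your existing connection code:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Threading;

static class AppKeepAlive
{
    // Sends a small application-level message on a fixed interval so that
    // firewalls/NAT devices between client and server see traffic and do
    // not reclaim the idle session.
    public static Timer Start(NetworkStream stream, object writeLock)
    {
        byte[] payload = Encoding.ASCII.GetBytes("KEEPALIVE\r\n");
        return new Timer(_ =>
        {
            try
            {
                lock (writeLock) // don't interleave with normal protocol writes
                {
                    stream.Write(payload, 0, payload.Length);
                }
            }
            catch (IOException)
            {
                // The write failed: the connection is gone.
                // Trigger your reconnect logic here.
            }
        }, null, dueTime: TimeSpan.FromMinutes(5), period: TimeSpan.FromMinutes(5));
    }
}
```

On the receiving side, recognize the keepalive line in your message parser and simply discard it.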
I have a web app. It opens a socket to a server, sends a message, and waits for a response. The user may then perform another socket request to the server, or may wait 5, 10, 15 minutes (etc.) and then send another message to the server, or may close the web app.
Should I close the socket after each send/receive request or keep it open?
Thanks
You can close the socket and make a new connection if the additional delay (connect time is roughly one round-trip time, i.e., the ping time) is not a problem. If you plan to use SSL in the future, it is better to keep the session alive, because establishing an SSL connection is much more expensive in terms of CPU resources. Consider the SO_KEEPALIVE socket option for permanent connections.
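Enabling SO_KEEPALIVE on a .NET Socket can look like the sketch below. The per-socket interval tuning via IOControl is Windows-specific, and the idle/interval values here are illustrative, not recommendations:

```csharp
using System;
using System.Net.Sockets;

static class KeepAliveOption
{
    public static void Enable(Socket socket)
    {
        // Turn on TCP keepalive probes for this socket (SO_KEEPALIVE).
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.KeepAlive, true);

        // Windows-only fine-tuning: three consecutive 32-bit values —
        // on/off flag, idle time before the first probe, and interval
        // between probes (both in milliseconds).
        byte[] keepAliveValues = new byte[12];
        BitConverter.GetBytes(1u).CopyTo(keepAliveValues, 0);      // enable
        BitConverter.GetBytes(30_000u).CopyTo(keepAliveValues, 4); // idle: 30 s
        BitConverter.GetBytes(5_000u).CopyTo(keepAliveValues, 8);  // interval: 5 s
        socket.IOControl(IOControlCode.KeepAliveValues, keepAliveValues, null);
    }
}
```

On .NET Core 3.0 and later there are also cross-platform options (SocketOptionName.TcpKeepAliveTime and related values) that avoid the raw IOControl call.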
In this project the protocol is to:
Open Socket
Send Data
Wait for the acknowledgement message or timeout
If the ack arrives within the proper window, all is well; close the socket.
If it times out, close the socket and start over up to N times.
I've noticed in the log that sometimes, after the timeout, we receive the ack anyway. Since the socket stays open for clean-up and stragglers after the close, I understand why.
But is there a better way to handle this? I'd like to be sure the connection is really down before reporting something to a line operator.
The timeout right now is an arbitrary value (2.5 seconds) tied to an external timer. It is not in the .Net TCP stack.
The TCP connection isn't really down unless the socket closes on your side. It takes minutes for TCP to decide the connection is down and close the socket if it doesn't receive any response from the network after sending data.
The sockets abstraction layers a bidirectional stream over the TCP channel. The user only sees that the stack has accepted a fragment when Write() (or equivalent) returns successfully, and that data has arrived when Read() returns a non-zero number of bytes. The lower levels are opaque. To be certain that the server has received and acknowledged your data, you need to confirm that Read() returns the expected amount of data within your allowed time period.
As you have to connect a new session for each request, you have little choice but to tear down the session to make way for the next one. In particular, you can't leave a session around as the server may not allow multiple concurrent connections.
You state that the timeout is 2.5 seconds. If this is much smaller than the message interval, is there a problem with extending the timeout to something close to that interval? This would appear to be more reliable than hammering away with multiple rapid requests for the same data.
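One way to implement "the ack must arrive within the window" is to put the timeout on the socket itself rather than on an external timer, so a late ack surfaces as a timeout result instead of arriving after you have already given up. A sketch, using the question's 2.5-second value; strictly, ReceiveTimeout applies to each Receive call rather than the total wait:

```csharp
using System;
using System.Net.Sockets;

static class AckWait
{
    // Returns true if an ack of the expected length arrives within the
    // timeout, false on timeout or if the peer closes the connection.
    // Any other socket error propagates to the caller.
    public static bool WaitForAck(Socket socket, int expectedBytes, int timeoutMs = 2500)
    {
        socket.ReceiveTimeout = timeoutMs; // applies to blocking Receive calls
        var buffer = new byte[expectedBytes];
        int received = 0;
        try
        {
            while (received < expectedBytes)
            {
                int n = socket.Receive(buffer, received, expectedBytes - received,
                                       SocketFlags.None);
                if (n == 0) return false; // peer closed its side
                received += n;
            }
            return true;
        }
        catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
        {
            return false;
        }
    }
}
```

After a false return, close the socket and retry, exactly as in the protocol steps above.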
I have a chat site (http://www.pitput.com) that connects users via socket connections.
On the client side I have a Flash object that opens a connection to a port on my server.
On the server I have a service that listens to that port asynchronously.
All is working fine, except that when I talk to someone, after an unknown period of time (about a couple of minutes) the server closes my connection and I get an error on the server:
" A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond".
I don't know exactly how the TCP socket works. Does it check for a "live" connection every couple of seconds? How does it decide when to close the connection? I'm pretty sure that the close operation is not coming from the client side.
Thanks.
Sounds like the server is handling the connection but not responding. This is the point where I usually pull out Wireshark to find out what's going on.
TCP/IP does have an option for checking for live connections; it's called "keepalive." Keepalives are hardly ever used. They're not enabled by default. They can be enabled on a system-wide basis by tweaking the Registry, but IIRC the lowest timeout is 1 hour. They can also be enabled on a single socket (with a timeout in minutes), but you would know if your application does that.
If you are using a web service and your client is connecting to an HTTP/HTTPS port, then the connection may be getting closed by the HTTP server (HTTP servers usually close their connections after a couple of minutes of idle time). It is also possible that an intermediate router is closing it on your behalf after some amount of idle time (this is not default behavior, but corporate routers are sometimes configured with such "helpful" settings).
If you are using a Win32 service, then it does in fact sound like the client side is dropping the connection or losing their network (e.g., moving outside the range of a wireless router). In the latter case, it's possible that the client remains oblivious to the fact that the connection has been closed (this situation is called "half-open"); the server sees the close but the client thinks the connection is still there.
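A common server-side heuristic for spotting a connection the peer has closed, assuming you hold the connected Socket: a socket that reports readable but has zero bytes available has been closed or reset by the peer. Note this cannot detect every half-open case; if the client silently vanished (e.g., left wireless range), only writing to the socket or a keepalive will reveal it.

```csharp
using System.Net.Sockets;

static class ConnectionProbe
{
    // Heuristic: if the socket is readable but has zero bytes available,
    // the peer has closed (or reset) its side of the connection.
    public static bool LooksClosed(Socket socket)
    {
        bool readable = socket.Poll(microSeconds: 0, SelectMode.SelectRead);
        return readable && socket.Available == 0;
    }
}
```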
Is this an ASP web service hosted with some company? If so, the server generally recycles apps every 10 to 20 minutes. You cannot have a web service running indefinitely, unless it's your own server (I believe).
I am facing a weird socket connection problem in a .NET Windows application. I am using a socket from .NET to communicate asynchronously with a legacy InterSystems Caché database. I have a specific timeout value in the application; when the timeout occurs, the user is prompted to stay connected to the application. By "stay connected" I mean the socket is not reset. I set the timeout to 30 minutes and choose "stay connected" on the first idle period; then when I navigate the application, it works fine.
If, without navigating the application, I choose "stay connected" a second time and then navigate the app, I get a socket "host refused" connection error. I assume this means the socket was terminated. But the weird part is that if I set the application timeout to 10 minutes, I still get the socket error the second time. When I check the socket's Connected property, it is still true. I am not catching an exception when I call the socket's Send method, but the data is not sent through the socket. I have checked the other .NET code; it is fine. This problem also occurs rarely, only about 1 time in 10. Any suggestions would be greatly helpful.
This sounds like a typical issue resulting from firewalls or other TCP settings.
Firewalls might silently drop the connection if it is idle for more than x seconds.
As the TCP protocol does not generate an event in such a case (similar to just unplugging the network cable), it is highly recommended to send a ping message every x seconds, so that the firewall keeps the session open and you can be sure you are still connected. If the ping is missed, the server disconnects the client.
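On the server side, that "disconnect if the ping is missed" policy can be as simple as recording the last time anything was received from each client and periodically dropping the ones that have gone quiet. A sketch; the class name and the 90-second idle limit (three missed 30-second pings) are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class ClientRegistry
{
    private readonly ConcurrentDictionary<Socket, DateTime> _lastSeen = new();
    private readonly TimeSpan _maxIdle = TimeSpan.FromSeconds(90); // 3 missed 30 s pings

    // Call whenever any data (including a ping) arrives from a client.
    public void Touch(Socket client) => _lastSeen[client] = DateTime.UtcNow;

    // Call periodically, e.g. from a timer, to drop silent clients.
    public void DropSilentClients()
    {
        foreach (var (client, seen) in _lastSeen)
        {
            if (DateTime.UtcNow - seen > _maxIdle)
            {
                _lastSeen.TryRemove(client, out _);
                client.Close(); // client missed its pings; disconnect it
            }
        }
    }
}
```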