I am writing a network layer on top of TCP, and I am running into trouble during my unit-test phase.
Here is what I'm doing (my library is composed of multiple classes, but I only show the native calls that cause the problem, to keep the post short):
private const int SERVER_PORT = 15000;
private const int CLIENT_PORT = 16000;
private const string LOCALHOST = "127.0.0.1";
private TcpClient Client { get; set; }
private TcpListener ServerListener { get; set; }
private TcpClient Server { get; set; }
[TestInitialize]
public void MyTestInitialize()
{
this.ServerListener = new TcpListener(new IPEndPoint(IPAddress.Parse(LOCALHOST), SERVER_PORT));
this.Client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
this.ServerListener.Start();
}
// In this method, I just try to connect to the server
[TestMethod]
public void TestConnect1()
{
var connectionRequest = this.ServerListener.AcceptTcpClientAsync();
this.Client.Connect(LOCALHOST, SERVER_PORT);
connectionRequest.Wait();
this.Server = connectionRequest.Result;
}
// In this method, I simulate an application error within the client, so the client is disposed
[TestMethod]
public void TestConnect2()
{
var connectionRequest = this.ServerListener.AcceptTcpClientAsync();
this.Client.Connect(LOCALHOST, SERVER_PORT);
connectionRequest.Wait();
this.Server = connectionRequest.Result;
this.Client.Dispose();
}
[TestCleanup]
public void MyTestCleanup()
{
this.ServerListener?.Stop();
this.Server?.Dispose();
this.Client?.Dispose();
}
First of all, I HAVE TO dispose of the server first if I want to be able to reconnect to the server on the same port from the same endpoint:
If you run my tests as they are, they pass the first time.
The second time, both tests throw an exception on the Connect method, complaining that the port is already in use.
The only way I found to avoid this exception (and to be able to connect on the same listener from the same endpoint) is to provoke a SocketException within the Server by sending bytes to the disposed client twice (the first send succeeds; the exception is thrown only on the second send).
I don't even need to dispose of the Server if I provoke an exception...
Why is Server.Dispose() not closing the connection and freeing the port? Is there a better way to free the port than by provoking an exception?
Thanks in advance.
(Sorry for my English, I am not a native speaker)
Here is an example within a Main function, so it can be checked more easily:
private const int SERVER_PORT = 15000;
private const int CLIENT_PORT = 16000;
private const string LOCALHOST = "127.0.0.1";
static void Main(string[] args)
{
var serverListener = new TcpListener(new IPEndPoint(IPAddress.Parse(LOCALHOST), SERVER_PORT));
var client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
serverListener.Start();
var connectionRequest = client.ConnectAsync(LOCALHOST, SERVER_PORT);
var server = serverListener.AcceptTcpClient();
connectionRequest.Wait();
// Oops, something went wrong (wrong password, for example); the client has to be disposed (I really want this behavior)
client.Dispose();
// Uncomment this to see the magic happens
//try
//{
//server.Client.Send(Encoding.ASCII.GetBytes("no problem"));
//server.Client.Send(Encoding.ASCII.GetBytes("oops looks like the client is disconnected"));
//}
//catch (Exception)
//{ }
// Let's try again, with a new password for example (as I said, I really do want to close the connection, and I need to keep the same client EndPoint!)
client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
connectionRequest = client.ConnectAsync(LOCALHOST, SERVER_PORT);
// If the previous try/catch block is left commented out, you will stay stuck here,
// because ConnectAsync has faulted with an exception that is only raised at the Wait() call
server = serverListener.AcceptTcpClient();
connectionRequest.Wait();
Console.WriteLine("press a key");
Console.ReadKey();
}
You may need to restart Visual Studio (or wait a while) if you trigger the bug and the program refuses to let you connect.
Your port is already in use. Run netstat and see. You'll find ports still open in the TIME_WAIT state.
Because you have not gracefully closed the sockets, the network layer must keep these ports open, in case the remote endpoint sends more data. Were it to do otherwise, the sockets could receive spurious data meant for something else, corrupting the data stream.
The right way to fix this is to close the connections gracefully (i.e. use the Socket.Shutdown() method). If you want to include a test involving the remote endpoint crashing, then you'll need to handle that scenario correctly as well. For one, you should set up an independent remote process that you can actually crash. For another, your server should correctly accommodate the situation by not trying to use the port again until an appropriate time has passed (i.e. the port is actually closed and is no longer in TIME_WAIT).
On that latter point, you may want to consider actually using the work-around you've discovered: TIME_WAIT involves the scenario where the status of the remote endpoint is unknown. If you send data, the network layer can detect the failed connection and effect the socket cleanup earlier.
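As a rough illustration only, a graceful close in the test cleanup from the question might look like the sketch below (assuming the TcpClient/TcpListener fields shown earlier; TcpClient.Client exposes the underlying Socket):
[TestCleanup]
public void MyTestCleanup()
{
    // Shut down each connected socket before closing it, so the FIN handshake
    // runs and pending data is flushed instead of the connection being reset.
    if (this.Server?.Connected == true)
        this.Server.Client.Shutdown(SocketShutdown.Both);
    if (this.Client?.Connected == true)
        this.Client.Client.Shutdown(SocketShutdown.Both);
    this.Server?.Close();
    this.Client?.Close();
    this.ServerListener?.Stop();
}
Even with a graceful close, one side may sit in TIME_WAIT for a while, as described above; the point is that the connection's state machine completes cleanly.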
For additional insights, see e.g.:
Port Stuck in Time_Wait
Reconnect to the server
How can I forcibly close a TcpListener
How do I prevent Socket/Port Exhaustion?
(But do not use the recommendation found among the answers to use SO_REUSEADDR/SocketOptionName.ReuseAddress…all that does is hide the problem, and can result in corrupted data in real-world code.)
Related
I'm new to the world of ZeroMQ and I'm working through the documentation of both NetMQ and ZeroMQ as I go. I'm currently implementing (or preparing to implement) the Paranoid Pirate Pattern, and hit a snag. I have a single app which is running the server(s), clients, and eventually queue, though I haven't implemented the queue yet. Right now, there should only be one server at a time running. I can launch as many clients as I like, all communicating with the single server. I am able to have my server "crash" and restart it (manually for now, automatically soon). That all works. Or at least, restarting the server works once.
To enforce that there's only a single server running, I have a thread (which I'll call the WatchThread) which opens a response socket that binds to an address and polls for messages. When the server dies, it signals its demise and the WatchThread decrements the count when it receives the signal. Here's the code snippet that is failing:
//This is the server's main loop:
public void Start(object? count)
{
num = (int)(count ?? -1);
_model.WriteMessage($"Server {num} up");
var rng = new Random();
using ResponseSocket server = new();
server.Bind(tcpLocalhost); //This is for talking to the clients
int cycles = 0;
while (true)
{
var message = server.ReceiveFrameString();
if (message == "Kill")
{
server.SendFrame("Dying");
return;
}
if (cycles++ > 3 && rng.Next(0, 16) == 0)
{
_model.WriteMessage($"Server {num} \"Crashing\"");
RequestSocket sock = new(); //This is for talking to the WatchThread
sock.Connect(WatchThreadString);
sock.SendFrame("Dying"); //This isn't working correctly
return;
}
if(cycles > 3 && rng.Next(0, 10) == 0)
{
_model.WriteMessage($"Server {num}: Slowdown");
Thread.Sleep(1000);
}
server.SendFrame($"Server{num}: {message}");
}
}
And here's the WatchThread code:
public const string WatchThreadString = "tcp://localhost:5000";
private void WatchServers()
{
_watchThread = new ResponseSocket(WatchThreadString);
_watchThread.ReceiveReady += OnWatchThreadOnReceiveReady;
while (_listen)
{
bool result = _watchThread.Poll(TimeSpan.FromMilliseconds(1000));
}
}
private void OnWatchThreadOnReceiveReady(object? s, NetMQSocketEventArgs a)
{
lock (_countLock)
{
ServerCount--;
}
_watchThread.ReceiveFrameBytes();
}
As you can see, it's pretty straightforward. What am I missing? It seems like what should happen is exactly what happens the first time everything is instantiated: the server is supposed to go down, so it opens a new socket to the pre-existing WatchThread and sends a frame. The WatchThread receives the message and decrements the counter appropriately. It's only with the second server that things don't behave as expected...
Edit: I was able to get it to work by unbinding/closing _watchThread and recreating it... it's definitely suboptimal and it still seems like I'm missing something. It's almost as if for some reason I can only use that socket once, though I have other request sockets being used multiple times.
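For reference, a rough sketch of that unbind-and-recreate workaround (illustrative only; it assumes NetMQ's Unbind/Close/Dispose accept the same address string used to bind, and reuses the field names from the snippets above):
private void ResetWatchSocket()
{
    _watchThread.ReceiveReady -= OnWatchThreadOnReceiveReady;
    _watchThread.Unbind(WatchThreadString);                  // release tcp://localhost:5000
    _watchThread.Close();
    _watchThread.Dispose();
    _watchThread = new ResponseSocket(WatchThreadString);    // recreate and re-bind
    _watchThread.ReceiveReady += OnWatchThreadOnReceiveReady;
}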
Additional Edit:
My netstat output with 6 clients running (kubernetes is in my hosts file as 127.0.0.1, as is detailed here):
TCP 127.0.0.1:5555 MyComputerName:0 LISTENING
TCP 127.0.0.1:5555 kubernetes:64243 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64261 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64264 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64269 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64272 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64273 ESTABLISHED
I believe the shutdown sequence is as follows (as described here):
The MSDN documentation (remarks section) reads:
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
This seems to imply that if I use Shutdown(SocketShutdown.Both), any data that has not yet been received may still be consumed. To test this:
I continuously send data to the client (via Send in a separate thread).
The client executed Shutdown(SocketShutdown.Both).
The BeginReceive callback on the server executes, however, EndReceive throws an exception: An existing connection was forcibly closed by the remote host. This means that I am unable to receive the 0 return value and in turn call Shutdown.
As requested, I've posted the server-side code below (it's wrapped in a Windows Form and was created just as an experiment). In my test scenario I did not see the CLOSE_WAIT state in TCPView as I normally did without sending the continuous data. So potentially I've done something wrong and I'm interpreting the consequences incorrectly. In another experiment:
Client connects to server.
Client executes Shutdown(SocketShutdown.Both).
Server receives shutdown acknowledgement and sends some data in response. Server also executes Shutdown.
Client receives data from server but the next BeginReceive is not allowed: A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call
In this scenario, I was still expecting a 0 return value from EndReceive to Close the socket. Does this mean that I should use Shutdown(SocketShutdown.Send) instead? If so, when should one use Shutdown(SocketShutdown.Both)?
Code from first experiment:
private TcpListener SocketListener { get; set; }
private Socket ConnectedClient { get; set; }
private bool serverShutdownRequested;
private object shutdownLock = new object();
private struct SocketState
{
public Socket socket;
public byte[] bytes;
}
private void ProcessIncoming(IAsyncResult ar)
{
var state = (SocketState)ar.AsyncState;
// Exception thrown here when client executes Shutdown:
var dataRead = state.socket.EndReceive(ar);
if (dataRead > 0)
{
state.socket.BeginReceive(state.bytes, 0, state.bytes.Length, SocketFlags.None, ProcessIncoming, state);
}
else
{
lock (shutdownLock)
{
serverShutdownRequested = true;
state.socket.Shutdown(SocketShutdown.Both);
state.socket.Close();
state.socket.Dispose();
}
}
}
private void Spam()
{
int i = 0;
while (true)
{
lock (shutdownLock)
{
if (!serverShutdownRequested)
{
try { ConnectedClient.Send(Encoding.Default.GetBytes(i.ToString())); }
catch { break; }
++i;
}
else { break; }
}
}
}
private void Listen()
{
while (true)
{
ConnectedClient = SocketListener.AcceptSocket();
var data = new SocketState();
data.bytes = new byte[1024];
data.socket = ConnectedClient;
ConnectedClient.BeginReceive(data.bytes, 0, data.bytes.Length, SocketFlags.None, ProcessIncoming, data);
serverShutdownRequested = false;
new Thread(Spam).Start();
}
}
public ServerForm()
{
InitializeComponent();
var hostEntry = Dns.GetHostEntry("localhost");
var endPoint = new IPEndPoint(hostEntry.AddressList[0], 11000);
SocketListener = new TcpListener(endPoint);
SocketListener.Start();
new Thread(Listen).Start();
}
Shutdown(SocketShutdown.Both) disables both the send and receive operations on the current socket. Calling Shutdown(SocketShutdown.Both) effectively disconnects your client from the server. You can see this by checking the socket's Connected property in your SocketState object on the server side: it will be false.
This happens because the Shutdown operation is not reversible: after stopping both send and receive on the socket, there is no point in keeping it connected, as it is isolated.
"Once the shutdown function is called to disable send, receive, or both, there is no method to re-enable send or receive for the existing socket connection."
(https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown)
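As a quick illustration of that Connected check, here is a sketch around the EndReceive call in the question's ProcessIncoming (the comment describes the behaviour claimed above):
try
{
    int dataRead = state.socket.EndReceive(ar);
    // ... use dataRead as in the original handler ...
}
catch (SocketException)
{
    // The client called Shutdown(SocketShutdown.Both); the receive fails and
    // the accepted socket now reports that it is no longer connected.
    Console.WriteLine("Connected = " + state.socket.Connected);   // false
}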
As for your question:
I continuously send data to the client (via Send in a separate thread).
The client executed Shutdown(SocketShutdown.Both). --> this disconnects the client
The BeginReceive callback on the server executes, however, EndReceive throws an exception: An existing connection was forcibly closed by the remote host. This means that I am unable to receive the 0 return value and in turn call Shutdown.
EndReceive throws an exception because the client socket is not connected anymore.
To gracefully terminate the socket (a short client-side sketch follows these steps):
the client socket calls Shutdown(SocketShutdown.Send) but should keep receiving
on the server, EndReceive returns 0 bytes read (the client signals there is no more data from its side)
the server
A) sends its last data
B) calls Shutdown(SocketShutdown.Send)
C) calls Close on the socket, optionally with a timeout to allow the data to be read from the client
the client
A) reads the remaining data from the server and then receives 0 bytes (the server signals there is no more data from its side)
B) calls Close on the socket
(https://learn.microsoft.com/it-it/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2?redirectedfrom=MSDN)
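A minimal client-side sketch of that sequence, assuming a connected Socket named client:
client.Shutdown(SocketShutdown.Send);           // signal "no more data from me"
var buffer = new byte[1024];
int read;
while ((read = client.Receive(buffer)) > 0)
{
    // consume the server's last data in buffer[0..read)
}
// read == 0: the server has shut down its send side as well
client.Close();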
Shutdown(SocketShutdown.Both) should be used when you no longer want to receive or send. You either want to close the connection abruptly, or you know that the other party has shut down using SocketShutdown.Receive. For example, you have a time server that sends the current time to the client that connects to it; the server sends the time and calls Shutdown(SocketShutdown.Receive), as it is not expecting any more data from the client. The client, upon receiving the time data, should call Shutdown(SocketShutdown.Both), as it is not going to send or receive any further data.
How can I make the server and the client run indefinitely and keep exchanging data (meaning, until the application is closed), instead of running for only one exchange of information?
I tried with while(true), but maybe I didn't put it in the right place, and then I can't really reach the methods for closing the socket and stopping the listener.
Here's some of the code of the server:
public static void StartServer()
{
try
{
IPAddress ip = IPAddress.Parse("192.168.1.11");
TcpListener myListener = new TcpListener(ip, 8000);
myListener.Start();
Socket s = myListener.AcceptSocket();
byte[] b = new byte[100];
int k = s.Receive(b);
... some other actions ...
s.Close();
myListener.Stop();
}
and then the Main() where I invoke it.
With the Client is the same story.
You can create an infinite loop which contains the Receive call, processes the data, and then returns to receive again. That way the server keeps accepting data from the client until either the server or the client terminates.
while (true)
{
    byte[] buffer = new byte[100];
    int read = s.Receive(buffer);
    if (read == 0)
        break; // the client has closed the connection
    //Do something with the first `read` bytes of buffer...
}
Beware though, because in your current design only one client is supported. If you want to support multiple clients, consider using threads (see the sketch below).
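A rough sketch of that thread-per-client variant, reusing the listener from the question (HandleClient is a hypothetical handler that would contain the receive loop above):
while (true)
{
    Socket clientSocket = myListener.AcceptSocket();          // accept the next client
    new Thread(() => HandleClient(clientSocket)).Start();     // serve it on its own thread
}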
I'm developing a Windows Phone application that will connect to my server. It does this using ConnectAsync when you push the login button. But if the server is down and you want to cancel the connection attempt, what should you do?
Here is the current client code, complete with my latest attempt at shutting the socket connection down. Assume that a timeout is easy to implement once you know how to turn the connection off.
private IPAddress ServerAddress = new IPAddress(0xff00ff00); //Censored my IP
private int ServerPort = 13000;
private Socket CurrentSocket;
private SocketAsyncEventArgs CurrentSocketEventArgs;
private bool Connecting = false;
private void Button_Click(object sender, RoutedEventArgs e)
{
try
{
if (Connecting)
{
CurrentSocket.Close();
CurrentSocket.Dispose();
CurrentSocketEventArgs.Dispose();
CurrentSocket = null;
CurrentSocketEventArgs = null;
}
UserData userdata = new UserData();
userdata.Username = usernameBox.Text;
userdata.Password = passwordBox.Password;
Connecting = ConnectToServer(userdata);
}
catch (Exception exception)
{
Dispatcher.BeginInvoke(() => MessageBox.Show("Error: " + exception.Message));
}
}
private bool ConnectToServer(UserData userdata)
{
CurrentSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
//Create a new SocketAsyncEventArgs
CurrentSocketEventArgs = new SocketAsyncEventArgs();
CurrentSocketEventArgs.RemoteEndPoint = new IPEndPoint(ServerAddress, ServerPort);
CurrentSocketEventArgs.Completed += ConnectionCompleted;
CurrentSocketEventArgs.UserToken = userdata;
CurrentSocketEventArgs.SetBuffer(new byte[1024], 0, 1024);
CurrentSocket.ConnectAsync(CurrentSocketEventArgs);
return true;
}
Edit: A thought that struck me: perhaps it's the server computer that queues up the requests even though the server software isn't running? Is that possible?
I believe
socket.Close()
should cancel the async connection attempt. There may be some exceptions that need to be caught as a consequence.
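As an illustration of handling that, here is a sketch of the Completed handler (names match the question's code; the exact SocketError you see when the socket is closed mid-connect may vary):
private void ConnectionCompleted(object sender, SocketAsyncEventArgs e)
{
    if (e.SocketError != SocketError.Success)
    {
        // The connect was cancelled (socket closed) or failed; don't touch the socket.
        return;
    }
    // Connected: continue the login using e.UserToken (the UserData instance).
}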
Your code looks OK.
As already said by Marc, closing the socket cancels all pending operations.
Yes, it's sometimes possible that the connection succeeds and then nothing happens. To verify, run this at the command line:
telnet 192.168.1.44 31337, where 192.168.1.44 is ServerAddress (a host name is OK as well) and 31337 is ServerPort. You might first need to enable the "Telnet Client" feature via Control Panel / Programs and Features / Turn Windows features on or off. If you see "Could not open connection", your application shouldn't be able to connect either. If you see a black screen with a blinking cursor, your application should connect OK.
What's going on here is that you are specifying a buffer in the argument to ConnectAsync.
CurrentSocketEventArgs.SetBuffer(new byte[1024], 0, 1024);
The documentation says:
Optionally, a buffer may be provided which will atomically be sent on the socket after the ConnectAsync method succeeds.
So your server is going to see the connection and data at once. Your cancellation code is just fine, it's just that the data is sent before you get a chance to cancel anything.
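If no data should accompany the connect, a sketch of ConnectToServer without the pre-set buffer might look like this (same field names as the question; assign a receive buffer later, before the first ReceiveAsync):
CurrentSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
CurrentSocketEventArgs = new SocketAsyncEventArgs();
CurrentSocketEventArgs.RemoteEndPoint = new IPEndPoint(ServerAddress, ServerPort);
CurrentSocketEventArgs.Completed += ConnectionCompleted;
CurrentSocketEventArgs.UserToken = userdata;
// No SetBuffer call here, so nothing is transmitted along with the connect itself.
CurrentSocket.ConnectAsync(CurrentSocketEventArgs);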
I have created a simple TCP server, and it works pretty well.
The problems start when we switch to stress tests: since our server should handle many concurrent open sockets, we created a stress test to check this.
Unfortunately, it looks like the server is choking and cannot respond to new connection requests in a timely fashion when the number of concurrent open sockets is around 100.
We have already tried a few types of server, and all produce the same behavior.
The server can be something like the samples in this post (all produce the same behavior):
How to write a scalable Tcp/Ip based server
Here is the code that we are using; when a client connects, the server just hangs in order to keep the socket alive.
public class Server
{
private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);
public Server()
{
listener.Start();
Console.WriteLine("Started.");
while (true)
{
Console.WriteLine("Waiting for connection...");
var client = listener.AcceptTcpClient();
Console.WriteLine("Connected!");
// each connection has its own thread
new Thread(ServeData).Start(client);
}
}
private static void ServeData(object clientSocket)
{
Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId);
var rnd = new Random();
try
{
var client = (TcpClient)clientSocket;
var stream = client.GetStream();
byte[] arr = new byte[1024];
stream.Read(arr, 0, 1024);
Thread.Sleep(int.MaxValue);
}
catch (SocketException e)
{
Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e);
}
}
}
The stress-test client is a simple TCP client that loops and opens sockets, one after the other:
class Program
{
static List<Socket> sockets;
static private void go(){
Socket newsock = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.Tcp);
IPEndPoint iep = new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060);
try
{
newsock.Connect(iep);
}
catch (SocketException ex)
{
Console.WriteLine(ex.Message );
}
lock (sockets)
{
sockets.Add(newsock);
}
}
static void Main(string[] args)
{
sockets = new List<Socket>();
//int start = 1;// Int32.Parse(Console.ReadLine());
for (int i = 1; i < 1000; i++)
{
go();
Thread.Sleep(200);
}
Console.WriteLine("press a key");
Console.ReadKey();
}
}
Is there an easy way to explain this behavior? Would a C++ implementation of the TCP server produce better results? Or maybe it is actually a client-side problem?
Any comment will be welcome!
ofer
Specify a huge listener backlog: http://msdn.microsoft.com/en-us/library/5kh8wf6s.aspx
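For example, a one-line sketch against the server code in the question (1000 is an arbitrary value; TcpListener.Start has an overload that takes a backlog):
listener.Start(1000);   // queue up to 1000 pending connections waiting to be accepted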
Firstly, a thread-per-connection design is unlikely to be especially scalable; you would do better to base your design on an asynchronous server model which uses I/O completion ports under the hood. This, however, is unlikely to be the problem here, as you're not really stressing the server that much.
Secondly the listen backlog is a red herring here. The listen backlog is used to provide a queue for connections that are waiting to be accepted. In this example your client uses a synchronous connect call which means that the client will never have more than 1 connect attempt outstanding at any one time. If you were using asynchronous connection attempts in the client then you would be right to look at tuning the listen backlog, perhaps.
Thirdly, given that the client code doesn't show that it sends any data, you can simply issue the read calls and remove the sleep that follows them; the read calls will block. The sleep just confuses matters.
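A sketch of ServeData along those lines (same names as the question's server; the blocking Read keeps the thread alive instead of the Sleep):
var client = (TcpClient)clientSocket;
var stream = client.GetStream();
byte[] arr = new byte[1024];
// Read blocks until data arrives or the client closes; no Thread.Sleep needed.
while (stream.Read(arr, 0, arr.Length) > 0)
{
    // process the bytes just read (the stress client never sends any,
    // so this simply keeps the connection open until it disconnects)
}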
Are you running the client and the server on the same machine?
Is this ALL the code in both client and server?
You might try and eliminate the client from the problem space by using my free TCP test client which is available here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
Likewise, you could test your test client against one of my simple free servers, like this one: http://www.lenholgate.com/blog/2005/11/simple-echo-servers.html
I can't see anything obviously wrong with the code (apart from the overall design).