NetMQ response socket poll fails after succeeding one time - c#

I'm new to the world of ZeroMQ and I'm working through the documentation of both NetMQ and ZeroMQ as I go. I'm currently implementing (or preparing to implement) the Paranoid Pirate Pattern, and I've hit a snag. I have a single app which runs the server(s), clients, and eventually the queue, though I haven't implemented the queue yet. Right now there should only be one server running at a time. I can launch as many clients as I like, all communicating with the single server. I am able to have my server "crash" and restart it (manually for now, automatically soon). That all works. Or at least, restarting the server works once.
To enforce that there's only a single server running, I have a thread (which I'll call the WatchThread) which opens a response socket that binds to an address and polls for messages. When the server dies, it signals its demise and the WatchThread decrements the count when it receives the signal. Here's the code snippet that is failing:
//This is the server's main loop:
public void Start(object? count)
{
    num = (int)(count ?? -1);
    _model.WriteMessage($"Server {num} up");
    var rng = new Random();
    using ResponseSocket server = new();
    server.Bind(tcpLocalhost); //This is for talking to the clients
    int cycles = 0;
    while (true)
    {
        var message = server.ReceiveFrameString();
        if (message == "Kill")
        {
            server.SendFrame("Dying");
            return;
        }
        if (cycles++ > 3 && rng.Next(0, 16) == 0)
        {
            _model.WriteMessage($"Server {num} \"Crashing\"");
            RequestSocket sock = new(); //This is for talking to the WatchThread
            sock.Connect(WatchThreadString);
            sock.SendFrame("Dying"); //This isn't working correctly
            return;
        }
        if (cycles > 3 && rng.Next(0, 10) == 0)
        {
            _model.WriteMessage($"Server {num}: Slowdown");
            Thread.Sleep(1000);
        }
        server.SendFrame($"Server{num}: {message}");
    }
}
And here's the WatchThread code:
public const string WatchThreadString = "tcp://localhost:5000";

private void WatchServers()
{
    _watchThread = new ResponseSocket(WatchThreadString);
    _watchThread.ReceiveReady += OnWatchThreadOnReceiveReady;
    while (_listen)
    {
        bool result = _watchThread.Poll(TimeSpan.FromMilliseconds(1000));
    }
}

private void OnWatchThreadOnReceiveReady(object? s, NetMQSocketEventArgs a)
{
    lock (_countLock)
    {
        ServerCount--;
    }
    _watchThread.ReceiveFrameBytes();
}
As you can see, it's pretty straightforward. What am I missing? It seems like what should happen is exactly what happens the first time everything is instantiated: the server is supposed to go down, so it opens a new socket to the pre-existing WatchThread and sends a frame. The WatchThread receives the message and decrements the counter appropriately. It's only with the second server that things don't behave as expected...
Edit: I was able to get it to work by unbinding/closing _watchThread and recreating it... it's definitely suboptimal, and it still seems like I'm missing something. It's almost as if I can only use that socket once, though I have other request sockets that are used multiple times.
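In case it helps, here is roughly what that workaround looks like (a sketch; ResetWatchThreadSocket is just an illustrative name, and it assumes the _watchThread field and handler shown above):

// Sketch of the workaround: tear the ResponseSocket down after it has
// served one message and rebind a fresh one. It works, but it shouldn't
// be necessary.
private void ResetWatchThreadSocket()
{
    _watchThread.ReceiveReady -= OnWatchThreadOnReceiveReady;
    _watchThread.Close(); // closing also releases the bound endpoint
    _watchThread = new ResponseSocket(WatchThreadString);
    _watchThread.ReceiveReady += OnWatchThreadOnReceiveReady;
}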
Additional Edit:
My netstat output with 6 clients running (kubernetes is in my hosts file as 127.0.0.1, as is detailed here):
TCP 127.0.0.1:5555 MyComputerName:0 LISTENING
TCP 127.0.0.1:5555 kubernetes:64243 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64261 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64264 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64269 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64272 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64273 ESTABLISHED

Related

Multiple Client Sockets from one instance of a program connecting different devices- working really slow

I have built an application that connects to 4 devices with the help of TCP sockets.
For that I created a TCP class with asynchronous methods to send and receive data.
public delegate void dataRec(string recStr);
public event dataRec dataReceiveEvent;
public Socket socket;
private byte[] m_byBuff = new byte[1024]; // receive buffer (declaration not shown in the original post)

public void Connect(string IpAdress, int portNum)
{
    socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    IPEndPoint epServer = new IPEndPoint(IPAddress.Parse(IpAdress), portNum);
    socket.Blocking = false;
    AsyncCallback onconnect = new AsyncCallback(OnConnect);
    socket.BeginConnect(epServer, onconnect, socket); // was "m_sock" in the original, presumably a paste error
}

public void SetupRecieveCallback(Socket sock)
{
    try
    {
        AsyncCallback recieveData = new AsyncCallback(OnRecievedData);
        sock.BeginReceive(m_byBuff, 0, m_byBuff.Length, SocketFlags.None, recieveData, sock);
    }
    catch (Exception ex)
    {
        //nevermind
    }
}

public void OnRecievedData(IAsyncResult ar)
{
    // Socket was the passed in object
    Socket sock = (Socket)ar.AsyncState;
    try
    {
        int nBytesRec = sock.EndReceive(ar);
        if (nBytesRec > 0)
        {
            string sRecieved = Encoding.ASCII.GetString(m_byBuff, 0, nBytesRec);
            OnAddMessage(sRecieved);
            SetupRecieveCallback(sock);
        }
        else
        {
            sock.Shutdown(SocketShutdown.Both);
            sock.Close();
        }
    }
    catch (Exception ex)
    {
        //nevermind
    }
}

public void OnAddMessage(string sMessage)
{
    if (mainProgram.InvokeRequired)
    {
        scanEventCallback d = new scanEventCallback(OnAddMessage);
        mainProgram.BeginInvoke(d, sMessage);
    }
    else
    {
        dataReceiveEvent(sMessage);
    }
}
I have 4 devices with 4 different IPs and ports that I send data to, and from which I receive data.
So I created 4 different instances of the class mentioned.
When I receive data I call callback functions to do their job with the data I received (the OnAddMessage event).
The connection with the devices is really good; latency is around 1-2 ms (it's on an internal network).
The functions I call via callbacks are pretty fast, each taking no more than 100 ms.
The problem is that it all runs really slowly, and it's not caused by the callback functions.
For each piece of data I send to a device, I receive one message back.
When I start sending and then stop after about 1 minute, the program keeps receiving data for 4-5 seconds, even after I turn off the devices. It's like some kind of lag, where I receive data that should have been delivered a lot earlier.
It looks like something is working really slowly.
I'm getting about 1 message per second from each device, so it shouldn't be a big deal.
Any ideas what else I should do or set, or what could actually be slowing me down?
You haven't posted all the relevant code, but here are some things to pay attention to:
With a network sniffer, like Wireshark or tcpdump, you can see what is actually going on.
Latency is not the only relevant factor for "connection speed". Look also at throughput, packet loss, re-transmissions, etc.
Try to send and receive in large chunks. Sending and receiving only single bytes is slow because it has a lot of overhead.
The receiver should read data faster than the sender can send it, or else internal buffers (OS, network) will fill up.
Try to avoid a "chatty" protocol, basically synchronous request/reply, if possible.
If you have a chatty protocol, you can get better performance by disabling the Nagle algorithm. The option to disable this algorithm is often called "TCP no delay" or similar; see the sketch after this list.
Don't close/reopen the connection for each message. TCP connection setup and teardown has quite some overhead.
If you have long standing open TCP connections, close the connection when the connection is idle for some time, for example several minutes.
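For the Nagle point above, a minimal sketch of disabling it in C# (assuming you connect with a TcpClient; set the option before connecting):

using System.Net.Sockets;

// Sketch: disable the Nagle algorithm ("TCP no delay") on a TcpClient.
var client = new TcpClient();
client.NoDelay = true; // convenience property on TcpClient
// Equivalent option on the underlying Socket:
client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);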

TcpClient dispose order

I am writing a network layer on top of TCP and I am facing some trouble during my unit-test phase.
Here is what I'm doing (my library is composed of multiple classes, but I only show you the native instructions causing my problems, to limit the size of the post):
private const int SERVER_PORT = 15000;
private const int CLIENT_PORT = 16000;
private const string LOCALHOST = "127.0.0.1";

private TcpClient Client { get; set; }
private TcpListener ServerListener { get; set; }
private TcpClient Server { get; set; }

[TestInitialize]
public void MyTestInitialize()
{
    this.ServerListener = new TcpListener(new IPEndPoint(IPAddress.Parse(LOCALHOST), SERVER_PORT));
    this.Client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
    this.ServerListener.Start();
}

// In this method, I just try to connect to the server
[TestMethod]
public void TestConnect1()
{
    var connectionRequest = this.ServerListener.AcceptTcpClientAsync();
    this.Client.Connect(LOCALHOST, SERVER_PORT);
    connectionRequest.Wait();
    this.Server = connectionRequest.Result;
}

// In this method, I assume there is an applicative error within the client and it is disposed
[TestMethod]
public void TestConnect2()
{
    var connectionRequest = this.ServerListener.AcceptTcpClientAsync();
    this.Client.Connect(LOCALHOST, SERVER_PORT);
    connectionRequest.Wait();
    this.Server = connectionRequest.Result;
    this.Client.Dispose();
}

[TestCleanup]
public void MyTestCleanup()
{
    this.ServerListener?.Stop();
    this.Server?.Dispose();
    this.Client?.Dispose();
}
First of all, I HAVE TO dispose the server first if I want to reconnect to the server sooner on the same port from the same endpoint:
If you run my tests like this, they will run successfully the first time.
The second time, both tests will throw an exception on the Connect method, arguing the port is already in use.
The only way I found to avoid this exception (and to be able to connect to the same listener from the same endpoint) is to provoke a SocketException within the Server by sending bytes to the disposed client twice (the first send causes no problem; the exception is thrown only on the second send).
I don't even need to Dispose the Server if I provoke an exception...
Why is Server.Dispose() not closing the connection and freeing the port? Is there a better way to free the port than by provoking an exception?
Thanks in advance.
(Sorry for my English, I am not a native speaker)
Here is an example within a Main function, so it can be checked out more easily:
private const int SERVER_PORT = 15000;
private const int CLIENT_PORT = 16000;
private const string LOCALHOST = "127.0.0.1";

static void Main(string[] args)
{
    var serverListener = new TcpListener(new IPEndPoint(IPAddress.Parse(LOCALHOST), SERVER_PORT));
    var client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
    serverListener.Start();

    var connectionRequest = client.ConnectAsync(LOCALHOST, SERVER_PORT);
    var server = serverListener.AcceptTcpClient();
    connectionRequest.Wait();

    // Oops, something wrong happened (wrong password for example), the client has to be disposed (I really want this behavior)
    client.Dispose();

    // Uncomment this to see the magic happen
    //try
    //{
    //    server.Client.Send(Encoding.ASCII.GetBytes("no problem"));
    //    server.Client.Send(Encoding.ASCII.GetBytes("oops looks like the client is disconnected"));
    //}
    //catch (Exception)
    //{ }

    // Let's try again, with a new password for example (as I said, I really want to close the connection in the first place, and I need to keep the same client EndPoint!)
    client = new TcpClient(new IPEndPoint(IPAddress.Parse(LOCALHOST), CLIENT_PORT));
    connectionRequest = client.ConnectAsync(LOCALHOST, SERVER_PORT);

    // If the previous try/catch is commented out, you will stay stuck here,
    // because the ConnectAsync has thrown an exception that will be raised only during the Wait() instruction
    server = serverListener.AcceptTcpClient();
    connectionRequest.Wait();

    Console.WriteLine("press a key");
    Console.ReadKey();
}
You may need to restart Visual Studio (or wait some time) if you trigger the bug and the program refuses to let you connect.
Your port is already in use. Run netstat and see. You'll find ports still open in the TIME_WAIT state.
Because you have not gracefully closed the sockets, the network layer must keep these ports open, in case the remote endpoint sends more data. Were it to do otherwise, the sockets could receive spurious data meant for something else, corrupting the data stream.
The right way to fix this is to close the connections gracefully (i.e. use the Socket.Shutdown() method). If you want to include a test involving the remote endpoint crashing, then you'll need to handle that scenario correctly as well. For one, you should set up an independent remote process that you can actually crash. For another, your server should correctly accommodate the situation by not trying to use the port again until an appropriate time has passed (i.e. the port is actually closed and is no longer in TIME_WAIT).
On that latter point, you may want to consider actually using the work-around you've discovered: TIME_WAIT involves the scenario where the status of the remote endpoint is unknown. If you send data, the network layer can detect the failed connection and effect the socket cleanup earlier.
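For illustration, here is a rough sketch of such a graceful close (assuming a connected Socket; error handling and timeouts are omitted):

using System.Net.Sockets;

// Sketch: shut down the send side first (this sends the FIN), then
// drain whatever the peer still has in flight, then release the handle.
static void CloseGracefully(Socket sock)
{
    sock.Shutdown(SocketShutdown.Send);   // no more outgoing data
    var buffer = new byte[1024];
    while (sock.Receive(buffer) > 0) { }  // read until the peer closes too
    sock.Close();
}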
For additional insights, see e.g.:
Port Stuck in Time_Wait
Reconnect to the server
How can I forcibly close a TcpListener
How do I prevent Socket/Port Exhaustion?
(But do not use the recommendation found among the answers to use SO_REUSEADDR/SocketOptionName.ReuseAddress…all that does is hide the problem, and can result in corrupted data in real-world code.)

C# sockets How can I handle unplanned disconnections on socket servers?

I am building a socket server in C#, and I have an issue with unplanned socket disconnects.
For now, I am handling it like this:
private List<TCPClient> _acceptedClientsList = new List<TCPClient>();

public void checkIfStillConnected()
{
    while (true)
    {
        lock (_sync)
        {
            for (int i = 0; i < _acceptedClientsList.Count(); ++i)
            {
                // 10 is the time to wait for a response, in microseconds.
                if (_acceptedClientsList[i].Client.Client.Poll(10, SelectMode.SelectRead) == true
                    && _acceptedClientsList[i].Client.Client.Available == 0)
                {
                    NHDatabaseHandler dbHandler = new NHDatabaseHandler();
                    dbHandler.Init();
                    if (_acceptedClientsList[i].isVoipClient == true)
                    {
                        dbHandler.UpdateUserStatus(_acceptedClientsList[i].UserName, EStatus.Offline);
                    }
                    RemoveClient(i); // this function removes the selected client at i from the acceptedClientsList
                    i--;
                }
            }
        }
    }
}
But this is not optimized at all. It's a thread that checks every socket's status with Socket.Poll, over and over, infinitely...
I don't know if it's the best thing to do... it feels heavy and weird.
Does anyone have an idea?
Thanks
It is best to have one dedicated thread to read incoming data from each socket. When an unplanned socket disconnection occurs (e.g. the client-side network drops out or crashes), the socket will throw an exception. You can then catch the exception, close the socket connection, and end that thread cleanly while keeping the other sockets running.
I learnt TCP/IP communication from this tutorial:
http://www.codeproject.com/Articles/155282/TCP-Server-Client-Communication-Implementation
The author explains his implementation very well. It's worth learning from. You can reference it to rewrite your own library.
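A rough sketch of that per-client pattern (names and buffer size are mine, not from the tutorial):

using System;
using System.Net.Sockets;
using System.Threading;

// Sketch: one dedicated reader thread per accepted client. A dropped
// connection surfaces as a SocketException (or a 0-byte read); we then
// close just this socket and let the thread end, leaving the other
// clients untouched.
static void StartClientReader(Socket clientSocket, Action<byte[], int> handleData)
{
    new Thread(() =>
    {
        var buffer = new byte[4096];
        try
        {
            int read;
            while ((read = clientSocket.Receive(buffer)) > 0)
            {
                handleData(buffer, read); // e.g. update the user's status
            }
        }
        catch (SocketException)
        {
            // unplanned disconnect (network drop, client crash) lands here
        }
        finally
        {
            clientSocket.Close();
        }
    }) { IsBackground = true }.Start();
}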

getting 10060 (Connection Timed Out) when stress testing simple tcp server

I have created a simple TCP server, and it works pretty well.
The problems start when we switch to stress tests: since our server should handle many concurrent open sockets, we created a stress test to check this.
Unfortunately, it looks like the server is choking and cannot respond to new connection requests in a timely fashion once the number of concurrent open sockets is around 100.
We have already tried a few types of server, and all produce the same behavior.
The server can be something like the samples in this post (all produce the same behavior):
How to write a scalable Tcp/Ip based server
Here is the code we are using; when a client connects, the server just hangs in order to keep the socket alive.
public class Server
{
    private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);

    public Server()
    {
        listener.Start();
        Console.WriteLine("Started.");
        while (true)
        {
            Console.WriteLine("Waiting for connection...");
            var client = listener.AcceptTcpClient();
            Console.WriteLine("Connected!");
            // each connection has its own thread
            new Thread(ServeData).Start(client);
        }
    }

    private static void ServeData(object clientSocket)
    {
        Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId);
        var rnd = new Random();
        try
        {
            var client = (TcpClient)clientSocket;
            var stream = client.GetStream();
            byte[] arr = new byte[1024];
            stream.Read(arr, 0, 1024);
            Thread.Sleep(int.MaxValue);
        }
        catch (SocketException e)
        {
            Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e);
        }
    }
}
The stress test client is a simple TCP client that loops and opens sockets, one after the other:
class Program
{
    static List<Socket> sockets;

    static private void go()
    {
        Socket newsock = new Socket(AddressFamily.InterNetwork,
            SocketType.Stream, ProtocolType.Tcp);
        IPEndPoint iep = new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060);
        try
        {
            newsock.Connect(iep);
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
        }
        lock (sockets)
        {
            sockets.Add(newsock);
        }
    }

    static void Main(string[] args)
    {
        sockets = new List<Socket>();
        //int start = 1;// Int32.Parse(Console.ReadLine());
        for (int i = 1; i < 1000; i++)
        {
            go();
            Thread.Sleep(200);
        }
        Console.WriteLine("press a key");
        Console.ReadKey();
    }
}
Is there an easy way to explain this behavior? Would a C++ implementation of the TCP server produce better results? Or is it actually a client-side problem?
Any comment will be welcomed!
ofer
Specify a huge listener backlog: http://msdn.microsoft.com/en-us/library/5kh8wf6s.aspx
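For example (a sketch; the backlog value is arbitrary):

using System.Net;
using System.Net.Sockets;

// Sketch: the Start overload that takes a backlog lets pending
// connections queue up instead of being refused while the accept
// loop is busy.
var listener = new TcpListener(IPAddress.Any, 2060);
listener.Start(500); // allow up to 500 queued connection requests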
Firstly, a thread-per-connection design is unlikely to be especially scalable; you would do better to base your design on an asynchronous server model which uses IO Completion Ports under the hood (see the sketch at the end of this answer). This, however, is unlikely to be the problem in this case, as you're not really stressing the server that much.
Secondly, the listen backlog is a red herring here. The listen backlog provides a queue for connections that are waiting to be accepted. In this example your client uses a synchronous connect call, which means the client never has more than one connect attempt outstanding at a time. If you were using asynchronous connection attempts in the client, then you would be right to look at tuning the listen backlog, perhaps.
Thirdly, given that the client code doesn't show that it sends any data, you can simply issue the read calls and remove the sleep that follows them; the read calls will block. The sleep just confuses matters.
Are you running the client and the server on the same machine?
Is this ALL the code in both client and server?
You might try and eliminate the client from the problem space by using my free TCP test client which is available here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
Likewise, you could test your test client against one of my simple free servers, like this one: http://www.lenholgate.com/blog/2005/11/simple-echo-servers.html
I can't see anything obviously wrong with the code (apart from the overall design).
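For what it's worth, here is a rough sketch of the asynchronous model mentioned above, using the Task-based API (which rides on IO completion ports on Windows); it is an illustration, not a drop-in replacement:

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Sketch: accept and serve clients asynchronously instead of starting
// one thread per connection; idle connections then cost no thread.
static async Task RunServerAsync()
{
    var listener = new TcpListener(IPAddress.Any, 2060);
    listener.Start(500);
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        _ = ServeAsync(client); // handle each client independently
    }
}

static async Task ServeAsync(TcpClient client)
{
    using (client)
    {
        var stream = client.GetStream();
        var buffer = new byte[1024];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // process "read" bytes here
        }
    }
}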

C# server - is this good way for timeouting and disconnecting?

On my multithreaded server I am experiencing trouble with connections that do not come from the proper client and so hang there unauthorized. I did not want to create a new thread just to check whether clients have been connected for some time without authorizing. Instead, I added this check to the ReceiveData thread, shown in the code below. Do you see a performance issue with this, or is it acceptable? The main point is that every time a client connects (and the Client class is instantiated), it starts a stopwatch. So I added a condition to this thread: if the elapsed time is greater than 1 second and the client is still not authorized, it is added to the list of clients marked for disconnection. Thanks
EDIT: This while(true) is the ReceiveData thread. I am using async operations, from TcpListener.BeginAccept to thread pooling. I have updated the code to let you see more.
protected void ReceiveData()
{
    List<Client> ClientsToDisconnect = new List<Client>();
    List<System.Net.Sockets.Socket> sockets = new List<System.Net.Sockets.Socket>();
    bool noClients = false;
    while (true)
    {
        sockets.Clear();
        this.mClientsSynchronization.TryEnterReadLock(-1);
        try
        {
            for (int i = 0; i < this.mClientsValues.Count; i++)
            {
                Client c = this.mClientsValues[i];
                if (!c.IsDisconnected && !c.ReadingInProgress)
                {
                    sockets.Add(c.Socket);
                }
                // clients connected more than 1 second without a received name are suspect and should be disconnected
                if (c.State == ClientState.NameNotReceived && c.watch.Elapsed.TotalSeconds > 1)
                    ClientsToDisconnect.Add(c);
            }
            if (sockets.Count == 0)
            {
                continue;
            }
        }
        finally
        {
            this.mClientsSynchronization.ExitReadLock();
        }
        try
        {
            System.Net.Sockets.Socket.Select(sockets, null, null, RECEIVE_DATA_TIMEOUT);
            foreach (System.Net.Sockets.Socket s in sockets)
            {
                Client c = this.mClients[s];
                if (!c.SetReadingInProgress())
                {
                    System.Threading.ThreadPool.QueueUserWorkItem(c.ReadData);
                }
            }
            // remove clients in ClientsToDisconnect
            foreach (Client c in ClientsToDisconnect)
            {
                this.RemoveClient(c, true);
            }
        }
        catch (Exception e)
        {
            //this.OnExceptionCaught(this, new ExceptionCaughtEventArgs(e, "Exception when reading data."));
        }
    }
}
I think I see what you are trying to do, and I think a better way would be to store new connections in a holding area until they have properly connected.
I'm not positive, but it looks like your code could drop a valid connection. If a new connection is made after the checking section and the second section takes more than a second, all the timers would time out before you could verify the connections. This would put the new connections in both the socket pool AND the ClientsToDisconnect pool. Not good: you would drop a currently active connection and chaos would ensue.
To avoid this, I would make the verification of a connection a separate thread from the use of the connection. That way you won't get bogged down in timing issues (well... you still will, but that is what happens when we work with threads and sockets) and you can be sure that none of the sockets you are using will be closed by you. A rough sketch follows.
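Here is a sketch of that holding-area idea, reusing the Client/ClientState types from your code (PromoteToActive is a hypothetical helper for moving a verified client into the main list):

// Sketch: unverified connections live in their own list and are swept
// by a dedicated thread, so the receive loop never mixes timeout
// checks with socket selection.
private readonly List<Client> _pendingClients = new List<Client>();
private readonly object _pendingLock = new object();

private void SweepPendingClients() // runs on its own thread or timer
{
    lock (_pendingLock)
    {
        for (int i = _pendingClients.Count - 1; i >= 0; i--)
        {
            Client c = _pendingClients[i];
            if (c.State != ClientState.NameNotReceived)
            {
                PromoteToActive(c);        // verified: hand over to the main list
                _pendingClients.RemoveAt(i);
            }
            else if (c.watch.Elapsed.TotalSeconds > 1)
            {
                RemoveClient(c, true);     // still anonymous after 1 s: drop it
                _pendingClients.RemoveAt(i);
            }
        }
    }
}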
My gut reaction is that while (true) plus if (sockets.Count == 0) continue will lead to heartache for your CPU. Why don't you put this on a Timer or something so that this function is only called every ~0.5 s? Is the 1 s barrier really that important?
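For example (a sketch; CheckClients stands in for one pass of the loop body above):

using System;
using System.Threading;

// Sketch: run one pass of the check every ~0.5 s on a timer instead
// of spinning in while(true), which burns CPU when nothing changes.
var sweepTimer = new Timer(_ => CheckClients(), null,
    TimeSpan.FromMilliseconds(500),  // first run after 0.5 s
    TimeSpan.FromMilliseconds(500)); // then every 0.5 s

void CheckClients() { /* select sockets, queue reads, drop timed-out clients */ }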
