Non-blocking TCP server - C#

It's not a question really, I'm just looking for some guidelines :)
I'm currently writing an abstract TCP server which should use as few threads as possible.
Currently it works this way: I have one thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the client sockets.
So my problem is building an efficient worker process, and I ran into a problem I can't really solve yet. The worker code looks something like this (simplified just to show where the problem is):
List<Socket> readSockets = new List<Socket>();
List<Socket> writeSockets = new List<Socket>();
List<Socket> errorSockets = new List<Socket>();

while (true)
{
    // last argument is the timeout in microseconds
    Socket.Select(readSockets, writeSockets, errorSockets, 10);

    foreach (Socket readSocket in readSockets)
    {
        // do reading here
    }

    foreach (Socket writeSocket in writeSockets)
    {
        // do writing here
    }

    // POINT2 - and here's the problem I describe below
}
It all works smoothly except for the 100% CPU utilization, because the while loop keeps cycling over and over. If my clients do a send->receive->disconnect routine it's not that painful, but if I try to keep connections alive with send->receive->send->receive over and over, it really eats up all the CPU. So my first idea was to put a sleep there: I check whether all sockets have sent their data and then call Thread.Sleep at POINT2 for just 10 ms, but that 10 ms later produces a noticeable delay whenever I want to receive the next command from a client socket. For example, without keep-alive, commands are executed within 10-15 ms, and with keep-alive it gets worse by at least 10 ms :(
Maybe it's just poor architecture? What can be done so that my processor doesn't hit 100% utilization and my server reacts to data appearing on a client socket as soon as possible? Can anybody point me to a good example of a non-blocking server and the architecture it should use?

Take a look at the TcpListener class first. It has a BeginAccept method that will not block, and will call one of your functions when someone connects.
Also take a look at the Socket class and its Begin* methods. These work the same way: one of your functions (a callback) is called whenever a certain event fires, and then you handle that event. All the Begin* methods are asynchronous, so they will not block and they shouldn't use 100% CPU either. Basically you want BeginReceive for reading and BeginSend for writing, I believe.
You can find more on Google by searching for these methods and for async socket tutorials. Here's how to implement a TCP client this way, for example. It works basically the same way for your server.
This way you don't need any infinite looping, it's all event-driven.
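For reference, here's a minimal sketch of that event-driven shape (the ClientState class, buffer size, and backlog are just illustrative placeholders, not part of any framework):

using System;
using System.Net;
using System.Net.Sockets;

class ClientState
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
}

class AsyncServer
{
    private Socket _listener;

    public void Start(int port)
    {
        _listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        _listener.Bind(new IPEndPoint(IPAddress.Any, port));
        _listener.Listen(100);
        // BeginAccept returns immediately; AcceptCallback runs when a client connects.
        _listener.BeginAccept(AcceptCallback, null);
    }

    private void AcceptCallback(IAsyncResult ar)
    {
        Socket client = _listener.EndAccept(ar);
        _listener.BeginAccept(AcceptCallback, null); // keep accepting further clients

        var state = new ClientState { Socket = client };
        // BeginReceive also returns immediately; ReceiveCallback fires when data arrives.
        client.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, ReceiveCallback, state);
    }

    private void ReceiveCallback(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int bytesRead = state.Socket.EndReceive(ar);
        if (bytesRead > 0)
        {
            // Process state.Buffer[0..bytesRead) here, then wait for the next chunk.
            state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, ReceiveCallback, state);
        }
        else
        {
            state.Socket.Close(); // 0 bytes read means the client disconnected
        }
    }
}

No thread ever spins here: between callbacks, nothing in your code runs at all.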

Are you creating a peer-to-peer application or a client-server application? You also need to consider how much data you are pushing through the sockets.
Asynchronous BeginSend and BeginReceive are the way to go; you will need to implement the callbacks, but it's fast once you get it right.
You probably don't want to set your send and receive timeouts too high either, but there should be a timeout so that if nothing is received after a certain time, the call comes out of the block and you can handle it there.
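One note on those timeouts: as far as I know, Socket.ReceiveTimeout and Socket.SendTimeout only apply to the blocking Receive/Send calls, not to the Begin*/End* ones, so they are only relevant if you keep some synchronous I/O around. A trivial example:

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.ReceiveTimeout = 5000; // milliseconds; a blocking Receive throws a SocketException when it expires
socket.SendTimeout = 5000;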

Microsoft has a nice async TCP server example. It takes a bit to wrap your head around it; it took me a few hours before I was able to create the basic TCP framework for my own program based on this example.
http://msdn.microsoft.com/en-us/library/fx6588te.aspx
The program logic goes roughly like this. There is one thread that calls listener.BeginAccept and then blocks on allDone.WaitOne. BeginAccept is an async call that gets offloaded to the thread pool and handled by the OS. When a new connection comes in, the OS calls the callback method passed to BeginAccept. That method flips allDone to let the main listening thread know it can listen again; the callback is just a transitional method and continues on to yet another async call to receive data.
The supplied callback, ReadCallback, is the primary work "loop" (effectively recursive async calls). I use the term "loop" loosely because each method call actually finishes, but not before kicking off the next async call. Effectively, you have a bunch of async calls all calling each other, and you pass your "state" object around. This object is your own and you can do whatever you want with it.
Every callback method gets only two things when the OS calls it:
1) the Socket object representing the connection
2) the state object you use for your logic
With your state object and socket object, you can effectively handle your "connections" asynchronously. The OS is VERY good at this.
Also, because your main loop blocks waiting for a connection to come in and offloads those connections to the thread pool via async calls, it remains idle most of the time. The thread pool for your sockets is handled by the OS via I/O completion ports, so no real work is done until data comes in. Very little CPU is used, and it's effectively threaded via the thread pool.
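In condensed form, the accept part of that sample looks roughly like this (StateObject and ReadCallback are the ones from the linked MSDN example and are not reproduced here):

using System;
using System.Net.Sockets;
using System.Threading;

static class ListenerLoop
{
    static readonly ManualResetEvent allDone = new ManualResetEvent(false);

    public static void StartListening(Socket listener)
    {
        while (true)
        {
            allDone.Reset();
            // Hand the accept off to the OS; the callback runs on a pool thread.
            listener.BeginAccept(AcceptCallback, listener);
            // Park the listening thread until the callback signals.
            allDone.WaitOne();
        }
    }

    static void AcceptCallback(IAsyncResult ar)
    {
        allDone.Set(); // let the listening thread go back to accepting
        var listener = (Socket)ar.AsyncState;
        Socket handler = listener.EndAccept(ar);
        // From here the MSDN sample calls handler.BeginReceive(..., ReadCallback, state)
        // and the chain of async callbacks described above takes over.
    }
}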
P.S. From what I understand, you don't want to do any hard work in these methods, just handle the movement of the data. Since the thread pool handles your network I/O and is shared with the rest of your application, you should offload any hard work to your own threads/tasks so the socket thread pool doesn't get bogged down.
P.P.S. I haven't found a way of closing the listening connection other than just disposing the listener. Because the async BeginAccept call doesn't complete until a connection comes in, I can't tell it to stop until it returns. I think I'll post a question about it on MSDN and link it here if I get a good answer.

Everything is fine in your code except the timeout value. You set it to 10 microseconds (the last argument of Socket.Select is in microseconds, so 10 means 10·10⁻⁶ seconds), which makes your while loop iterate very often. Set an adequate value (10 seconds, for example) and your code will not eat 100% CPU; Select still returns as soon as one of the sockets becomes ready.
List<Socket> readSockets = new List<Socket>();
List<Socket> writeSockets = new List<Socket>();
List<Socket> errorSockets = new List<Socket>();

while (true)
{
    Socket.Select(readSockets, writeSockets, errorSockets, 10 * 1000 * 1000); // 10 seconds, in microseconds

    foreach (Socket readSocket in readSockets)
    {
        // do reading here
    }

    foreach (Socket writeSocket in writeSockets)
    {
        // do writing here
    }

    // POINT2
}

Related

Socket ReceiveAsync, Timeouts and Questions

I need to reimplement a database connection driver for a legacy COBOL database for one of my customers. The way the application is built, I cannot use async/await (just leave it at that, I know it is stupid).
The whole application is an ASP.NET API.
The old driver uses a C++ DLL that is included via interop methods. The idea behind the old system is: use one connection to the DB for everything, have multiple threads send packets, and have one thread that receives the answers and delegates them to the right thread.
To keep the connection alive, one needs to send some sort of ping message to the database and handle its pong message.
I reimplemented that as a POC in C#: I have one connection, open a background thread, and use AutoResetEvents to notify the right threads that the answer is ready to be processed. I set ReceiveTimeout to 5 seconds, and while nobody was sending data to the server, the receive timeout gave me a chance to send the ping message to the server.
One reason for the rewrite is that the one-connection solution does not scale.
So my idea is to use a socket pool and ReceiveAsync with SocketAsyncEventArgs on the sockets.
The solution works so far, but not really well. Here are some questions:
As ReceiveTimeout is not compatible with ReceiveAsync, is there another way than a timer to send my ping messages?
When using ReceiveAsync, can I still use a normal Send to send data, or do I have to use SendAsync?
When ReceiveAsync does not receive all required data, may I use Receive to read the rest of it, or is it better to use ReceiveAsync again for the missing data?
Maybe not relevant: I use Artillery to fire some performance tests at the new driver; from time to time they time out after 30 seconds (that's the DB transaction timeout I set). When I try to debug that, Artillery gets ESOCKETTIMEDOUT even though no breakpoint is hit. Is this a known behaviour when debugging an IIS process under load?
use AutoResetEvents to notify the right threads that the answer is ready to be processed.
May I suggest a thread-safe queue? BlockingCollection<T> or BufferBlock<T>?
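A rough sketch of what the queue idea could look like for the request/response routing, assuming a hypothetical Response type and that every request carries an id the receive thread can use to find the waiter:

using System;
using System.Collections.Concurrent;

class Response { public int RequestId; public byte[] Payload; }

class PendingRequests
{
    private readonly ConcurrentDictionary<int, BlockingCollection<Response>> _pending
        = new ConcurrentDictionary<int, BlockingCollection<Response>>();

    // Worker thread: register the request, send it elsewhere, then block here.
    public Response WaitFor(int requestId, TimeSpan timeout)
    {
        var slot = new BlockingCollection<Response>(boundedCapacity: 1);
        _pending[requestId] = slot;
        if (!slot.TryTake(out var response, timeout))
        {
            _pending.TryRemove(requestId, out _);
            throw new TimeoutException("No answer for request " + requestId);
        }
        return response;
    }

    // Single receive thread: route a parsed answer to whoever is waiting for it.
    public void Deliver(Response response)
    {
        if (_pending.TryRemove(response.RequestId, out var slot))
            slot.Add(response);
    }
}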
I set the ReceiveTimeout to 5 seconds, and while there was nobody sending data to the server, the receive timeout helped me to send the ping-message to the server.
This is weird. I assume the entire protocol is ping-pong based, or else using a receive timeout to send messages would not work.
my idea is to use a socket pool and ReceiveAsync with SocketAsyncEventArgs on the sockets
If you can't use async/await, I would advise switching to the Begin*/End* style of asynchronous API. Going straight from synchronous to SocketAsyncEventArgs is quite a leap; SocketAsyncEventArgs is the most difficult form of socket async programming.
is there another way than a timer to send my ping messages
I would recommend a timer; that's the normal solution for heartbeat messages. The desired semantics should be "we want to send data at least this often". So use a timer that you can reset when sending regular messages (not receiving messages).
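For example, a resettable heartbeat timer could look like this (SendPing here is a hypothetical method on your connection object, not a framework call):

using System;
using System.Threading;

class Heartbeat : IDisposable
{
    private readonly Timer _timer;
    private readonly TimeSpan _interval;

    public Heartbeat(TimeSpan interval, Action sendPing)
    {
        _interval = interval;
        _timer = new Timer(_ => sendPing(), null, interval, interval);
    }

    // Call this after every regular (non-ping) send to push the next ping back.
    public void Reset() => _timer.Change(_interval, _interval);

    public void Dispose() => _timer.Dispose();
}

// usage, with a hypothetical connection object:
// var heartbeat = new Heartbeat(TimeSpan.FromSeconds(5), () => connection.SendPing());
// connection.Send(packet); heartbeat.Reset();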
when using ReceiveAsync, can I still use normal Send to send data, or do I have to use SendAsync?
You should be able to use synchronous for one stream and asynchronous for the other. I've never tried this, though; all systems I've worked on are fully asynchronous.
when ReceiveAsync does not receive all required data, may I use Receive to read the rest of it, or is it better to use ReceiveAsync again for the missing data?
This question doesn't make as much sense to me. If you're asynchronously reading, you shouldn't block the calling thread.
Also, I think this question is framed from the wrong perspective. It seems like the code wants to "receive the next message", but this is a problematic way to approach reading from a socket. Instead, I recommend that your code have a loop that endlessly reads from the socket and passes that data to another type that buffers it as necessary and pushes out messages as they finish.
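As a sketch of that read loop in the Begin*/End* style (since async/await is off the table here), with the onBytes callback standing in for whatever type buffers the bytes and pushes out complete messages:

using System;
using System.Net.Sockets;

class SocketReader
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[8192];
    private readonly Action<byte[], int> _onBytes;

    public SocketReader(Socket socket, Action<byte[], int> onBytes)
    {
        _socket = socket;
        _onBytes = onBytes;
    }

    public void Start() =>
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);

    private void OnReceive(IAsyncResult ar)
    {
        // Error handling omitted for brevity.
        int read = _socket.EndReceive(ar);
        if (read == 0) { _socket.Close(); return; } // remote side closed the connection
        _onBytes(_buffer, read);                    // hand the raw bytes to the framing layer
        Start();                                    // immediately queue the next receive
    }
}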
is this a known behaviour when debugging an IIS process under load?
I would not expect so, but I don't have much IIS load testing experience.

SerialPort DataReceived threads ending before execution completes

I have noticed that when a new thread is started from the SerialPort DataReceived event, it works fine if the work consists of just a few methods that change some value and send on another port. But if the method needs to do more extensive processing, like sending on another port, waiting for an ACK, sending again, and receiving a decent amount of data (20 KB) in 256-byte packets, then the thread just stops somewhere and never completes. When the code is stepped through, it seems to work fine.
I have read other topics where people ask about this issue, but there was no real "solution", just advice to use another method such as timers to poll the ports instead. I even made a workaround by having the main thread "poll" a variable that is changed from the event, rather than having the event do the work, and this seems to work. But when using a Windows Forms app I had to create a new thread, which seems to do the same thing: either not completing the code or not executing the new thread, which is just a while loop that runs forever checking a variable.
I can provide code if needed; I just wanted some insight on how to address this properly.
Nobody here knew the answer to the questions or could explain the limitations, but I was able to get around the issues using timers to run while loops checking for variable changes, and starting threads that did the same.

How to correctly use TPL with TcpClient?

I wrote a server using TcpListener that is supposed to handle many thousands of concurrent connections.
Since I know that most of the time most connections will be idle (with the occasional ping-pong to make sure the other side is still there) async programming seemed to be the solution.
However after the first few hundred clients performance rapidly deteriorates. So rapidly in fact that I can barely reach 1000 concurrent connections.
The CPU is not maxed out (averaging at ~4%), RAM usage is <100MB, and there's not a lot of network traffic going on.
When I pause the server in Visual Studio and take a look at the 'Tasks' window, there are countless (hundreds) tasks with status "scheduled" and only few (less than 30) "running/active" tasks.
I tried to profile using Visual Studio as well as dotTrace Performance, but I couldn't find anything wrong: no lock contention, no "hot path" where a lot of CPU is used.
It seems like the application just slows down overall.
The setup
I have a simple while(true) and inside it there's this:
var client = await tcpListener.AcceptTcpClientAsync().ConfigureAwait(false);
Task.Run(() => OnClient(client));
In order to handle the connection I made a few methods to encapsulate the different stages of the connection.
For example, inside the OnClient above there's await HandleLogin(...), and then it enters a while(client.IsConnected) loop that just does await stream.ReadBuffer(1). stream is just the normal NetworkStream you get from TcpClient.GetStream(), and ReadBuffer is a custom extension method implemented like this:
public static async Task<byte[]> ReadBuffer(this Stream stream, int length)
{
    byte[] buffer = new byte[length];
    int read = 0;
    while (read < length)
    {
        int remaining = length - read;
        int readNow = await stream.ReadAsync(buffer, read, remaining).ConfigureAwait(false);
        if (readNow <= 0)
            throw new SocketException((int)SocketError.ConnectionReset);
        read += readNow;
    }
    return buffer;
}
I use .ConfigureAwait(false) at every single place where I await anything because I have no need for any sort of synchronization context, and I don't want to pay the performance overhead of retrieving/creating a synchronization context everywhere.
One thing I noticed is that when I spawn 50 connections from my test tool and then just close it (so all the connections it made should receive a ConnectionReset SocketException on the server), it takes a long time for the server to react at all, oftentimes hanging completely until a new connection arrives.
Could it be that some continuations want to synchronize and run on some specific thread?
It's possible (when disconnecting at the right moment) to make the server application almost unusable with as few as 20 connections.
What am I doing wrong?
If it is some bug (which I assume it is), how would I go about finding it?
I narrowed the problem down to many Tasks just sitting at NetworkStream.ReadAsync(...) even though they should instantly receive a SocketException (ConnectionReset).
I tried starting my test tool (which is just using TcpClient) on a remote machine as well as locally and I get the same results.
Edit 1
My OnClient is defined as async Task OnClient(TcpClient client). Inside it, it awaits the different stages of the connection: authentication, some settings negotiation, and then it enters a loop where it waits for messages.
I use Task.Run because I do not want to wait until one client is done; I want to accept all clients as fast as possible, spawning a new Task for each one. I am, however, unsure whether I couldn't/shouldn't just call OnClient(client) without the Task.Run around it and also without awaiting OnClient (that produces a compiler hint that doesn't go away, but it is what I want, I think; I don't want to wait until the client is done).
The last stage
The last stage the connection enters after authentication and settings negotiation is a loop where the server waits for messages from the client.
However, before that the server also starts another Task.Run (with while(is connected) and await Task.Delay(...)) to send ping packets and do a few other "management" things.
All writes into the NetworkStream are synchronized by using the lock mechanism from the Nito AsyncEx library to make sure no packets are somehow interleaved.
If any exception happens anywhere (when reading or writing), I always call .Close() on the TcpClient to make sure all other pending, incomplete reads and writes throw an exception.
I narrowed the problem down to many Tasks just sitting at NetworkStream.ReadAsync(...) even though they should instantly receive a SocketException (ConnectionReset).
This is an incorrect assumption. You have to write to the socket to detect dropped connections.
This is one of many pitfalls of TCP/IP programming, which is why I recommend people use SignalR if at all possible.
Other pitfalls that jump out from your code/description:
You're attempting to use asynchronous APIs, but your code also has Task.Run, so it's still doing a thread jump right away. This may or may not be desirable. (That's assuming OnClient is an async method; if it's doing sync-over-async, then it's definitely not a good pattern.)
while(client.IsConnected) is a common incorrect pattern. You should have both a read loop and a write queue processor running simultaneously; a rough sketch of that shape follows below. In particular, IsConnected is absolutely meaningless: it literally only means that the socket was connected at some point in the past, not that it is still connected. If code has IsConnected, then there's a bug.
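Here is that sketch, assuming System.Threading.Channels is available for the write queue (any thread-safe async queue would do) and with ProcessMessage as a hypothetical stand-in for the protocol's framing/dispatch:

using System.Net.Sockets;
using System.Threading.Channels;
using System.Threading.Tasks;

class Connection
{
    private readonly NetworkStream _stream;
    private readonly Channel<byte[]> _writes = Channel.CreateUnbounded<byte[]>();

    public Connection(TcpClient client) => _stream = client.GetStream();

    public Task RunAsync() => Task.WhenAll(ReadLoopAsync(), WriteLoopAsync());

    public ValueTask EnqueueAsync(byte[] packet) => _writes.Writer.WriteAsync(packet);

    private async Task ReadLoopAsync()
    {
        var buffer = new byte[8192];
        while (true)
        {
            int read = await _stream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
            if (read == 0) break;          // remote side closed; no IsConnected polling anywhere
            ProcessMessage(buffer, read);  // hypothetical framing/dispatch
        }
        _writes.Writer.TryComplete();      // stop the write loop once the read loop ends
    }

    private async Task WriteLoopAsync()
    {
        while (await _writes.Reader.WaitToReadAsync().ConfigureAwait(false))
        {
            while (_writes.Reader.TryRead(out var packet))
                await _stream.WriteAsync(packet, 0, packet.Length).ConfigureAwait(false);
        }
    }

    private void ProcessMessage(byte[] buffer, int count) { /* parse and dispatch */ }
}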

Multi-threading-safe usage with a SerialPort controller

I have read dozens of articles about threading in C# and Application.DoEvents(), and I still can't use them properly to get my task done:
I have a controller connected to my COM port. This controller works on commands (I send a command and need to wait a few ms to get a response from it). Assume the response is data that I want to plot at a regular interval using a loop:
start my loop.
send a command to the controller via the serial port.
wait for the response (say 20 ms).
obtain the data.
repeat this loop every, say, 100 ms.
This simply doesn't want to work! I tried to communicate with the controller on another thread, but it seems it can't access the SerialPort, which belongs to the main thread (roughly speaking).
Any help is appreciated.
Application.DoEvents is, for all it does, nothing more than a nested call to the low-level Windows message loop on the same thread, which can easily cause recursion if you call it from an event handler. You might consider creating your serial port object on the worker thread and communicating through threading classes (i.e. the WaitHandles and similar), or calling back to your UI thread using BeginInvoke and EndInvoke on the UI object.
If you handle the SerialPort.DataReceived event and then use either SerialPort.ReadLine or SerialPort.Read(byte[], int, int), those methods will be executed on a separate thread. I prefer to use a mutex to control access to the byte buffer as a shared resource; a small sketch is below. Also, have you ever communicated with your device successfully? If not, then in addition to the port settings, check the SerialPort.NewLine property and the SerialPort.Handshake property. These settings vary depending on the device you are trying to communicate with.
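A small sketch of that approach, using lock in place of the mutex around the shared buffer (the port name, baud rate, and NewLine are placeholders for whatever your device actually needs):

using System.IO.Ports;
using System.Text;

class ControllerPort
{
    private readonly SerialPort _port;
    private readonly StringBuilder _buffer = new StringBuilder();
    private readonly object _bufferLock = new object();

    public ControllerPort(string portName)
    {
        _port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One);
        _port.NewLine = "\r\n";               // must match what the device actually sends
        _port.DataReceived += OnDataReceived; // fires on a thread-pool thread
        _port.Open();
    }

    public void SendCommand(string command) => _port.WriteLine(command);

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        string chunk = _port.ReadExisting();
        lock (_bufferLock)
        {
            // Accumulate here; parse complete responses elsewhere under the same lock.
            _buffer.Append(chunk);
        }
    }
}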
Why do you use it to begin with?
Have a look at these pages; they might give you a direction:
My favorite: Is DoEvents Evil?
From msdn blog Keeping your UI Responsive and the Dangers of Application.DoEvents
From msdn forums Application does not return from call to DoEvents
Without code, it'll be hard to help. Even with code, it might be hard to help :)
I'm agreeing with gunr2171 on this :)

C# multiple threads and serial communication

Today I came across some strange behavior. I have a serial device that I access using the SerialPort class. The main application has a timer that polls the device once every second for a status update. At a certain point I need to do some time-consuming work, so to avoid blocking the GUI I used a BackgroundWorker. The BackgroundWorker needs to access the same serial device once. Sometimes the access works, sometimes it doesn't: the classical multi-threading scenario. So I tried using a Mutex around the function that sends the new command to the serial device.
For the serial device I put everything together in its own class. In this class I have a sendCommand() function that writes the command to the device and uses an AutoResetEvent and the OnDataReceived event to wait for the answer. The sendCommand function blocks until the answer is received or a timeout occurs. I then acquire the Mutex when entering sendCommand and release it on all possible exits. It still does not work.
Is there a better way to handle this?
Thanks,
Tobias
I have an application that does this exact same thing. What I did was create a serial access class and, whenever I would call it (from either the GUI or one of my background threads), I would have the following:
private void myFunction(SerialClass myserialobject)
{
    if (myserialobject == null)
        return;

    lock (myserialobject)
    {
        // code accessing the serial object
        // ...
        // the lock is released automatically when this block ends
    }
}
I used this in both the main thread and any other threads requiring access. It does block, but that's the point of the lock statement.
Also, instead of using an event handler for the OnDataReceived event, I made my serial object perform a blocking read after any write; that way it prevented any data from being received in the wrong context. I'm not sure exactly how your program is set up, but you may want to consider doing that. It works best if you know the number of bytes you expect to read after writing to the port; that way you won't have to use Sleep to make sure all the data was read.
What I usually do is run the serial read/write thread in a loop, reading commands from a blocking queue with a timed wait (see the sketch below). If a serial request object is received within the timeout, the thread executes it; if the wait times out, the thread performs a poll of the serial device.
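A minimal sketch of that loop, with SerialRequest, Execute, and PollDevice as hypothetical placeholders for your own command object and serial I/O:

using System;
using System.Collections.Concurrent;

class SerialRequest { /* command bytes, completion signal, etc. */ }

class SerialWorker
{
    private readonly BlockingCollection<SerialRequest> _requests = new BlockingCollection<SerialRequest>();

    public void Enqueue(SerialRequest request) => _requests.Add(request);

    // Runs on the single thread that owns the SerialPort.
    public void Run()
    {
        while (true)
        {
            // Wait up to 100 ms for a request; on timeout, poll the device instead.
            if (_requests.TryTake(out var request, TimeSpan.FromMilliseconds(100)))
                Execute(request);  // write the command, then do a blocking read of the reply
            else
                PollDevice();      // periodic status poll
        }
    }

    private void Execute(SerialRequest request) { /* write + blocking read on the port */ }
    private void PollDevice() { /* status poll of the serial device */ }
}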
