How to create a custom thread pool - C#

I'm using C# with .NET Framework 4.5. I'm writing a server application that should be able to connect to arbitrarily many clients at once. So I have one thread that will listen for connections, and then it will send the connection to a background thread to go into a loop waiting for messages. Since the number of supportable client connections should be very high, spawning a new thread for every connection won't work. Instead what I need is a thread pool. However, I don't want to use the system thread pool because these threads will essentially be blocked in a call to Socket.Select indefinitely, or at least for the life of the connections they host.
So I think I need a custom ThreadPool that I can explicitly round-robin the connections over to. How can I achieve this in C#?

There's no point in using threads for this - that's just wasting resources.
Instead, you want to use asynchronous I/O. Now, ignoring all the complexities involved with networking, the basic idea is something like this:
async Task Loop()
{
    using (var client = new TcpClient())
    {
        await client.ConnectAsync(IPAddress.Loopback, 6536).ConfigureAwait(false);
        var stream = client.GetStream();
        byte[] outBuf = new byte[4096];

        // The "message loop"
        while (true)
        {
            // Asynchronously wait for the message
            var read = await stream.ReadAsync(outBuf, 0, outBuf.Length);
            // Handle the message, asynchronously send replies, whatever...
        }
    }
}
You can run this method for each of the connections you're accepting (without using await - that's very important). The thread handling that particular socket is released at every await; the continuation of that await is posted to a thread-pool thread.
The result is that the number of threads in use at any given time tends to self-balance with the available CPU cores etc., while you can easily service thousands of connections at a time.
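For instance, the accept side might look like the sketch below, where AcceptLoopAsync and HandleClientAsync are placeholder names (HandleClientAsync standing in for a per-connection variant of Loop that takes the accepted TcpClient):
async Task AcceptLoopAsync(TcpListener listener)
{
    listener.Start();
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        // Deliberately not awaited: each client loop runs independently,
        // and every await inside it releases the thread back to the pool.
        Task clientTask = HandleClientAsync(client);
        // Observe faults so exceptions aren't silently swallowed.
        var ignored = clientTask.ContinueWith(
            t => Console.Error.WriteLine(t.Exception),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}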
Do not use the exact code I posted. You need to add tons of error-handling code etc. Handling networking safely and reliably is very tricky.

Related

Poll a server, send messages, and not block the user interface

I'm working on a gui application (C#, .NET Core, WinUI 3) that connects to a remote device. The device acts like a server with a single available connection.
By the app's requirements, I need to periodically poll the device and allow a user to send commands. To solve this,
I create a single connection object (say, a tcp/ip socket)
I start the polling via pollingTask = Task.Run(polling.PollInfinite)
In the polling and the user's network operations, the connection singleton is locked (lock (connection) { connection.send/receive }).
The scheme above works well when I test it on a local network: messages to the device do not get interleaved, and the app's UI is not blocked (at least in my experience).
Now I imagine two situations
A non-local network may have a larger ping
A user may run the app on a computer with a single-core CPU (only one hardware thread)
In each situation... Would the UI be blocked? Could the user send commands while the polling waits for the device's response?
Also, I'm looking for general recommendations for meeting the app's requirements.
P.S. I'm not experienced with asynchronous programming, but I suppose it should be used anywhere in my app where a network operation happens.
P.S. I'm not experienced with asynchronous programming, but I suppose it should be used anywhere in my app where a network operation happens.
Yes - that was one of the main reasons async-await was introduced into the language.
As for the actual problem, there are several approaches. The one most similar to yours is to switch from lock (which cannot contain an await) to a SemaphoreSlim with one slot, and to use async versions of connection.send/receive (if they do not exist, wrap the synchronous calls in Task.Run):
SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1); // single instance

// ... somewhere in the code
await _semaphore.WaitAsync();
try
{
    await connection.sendAsync/receiveAsync; // or: await Task.Run(() => connection.send/receive)
}
finally
{
    _semaphore.Release();
}
For convenience you can wrap this in a proxy class for easier consumption.
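For illustration, such a proxy might look like the sketch below; the Connection type and its SendAsync/ReceiveAsync methods are assumptions standing in for your actual connection object:
// Hypothetical proxy that serializes every exchange with the single connection.
public class SerializedConnection
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly Connection _connection; // assumed connection type

    public SerializedConnection(Connection connection)
    {
        _connection = connection;
    }

    public async Task<byte[]> ExchangeAsync(byte[] request)
    {
        await _gate.WaitAsync();
        try
        {
            await _connection.SendAsync(request);    // assumed async send
            return await _connection.ReceiveAsync(); // assumed async receive
        }
        finally
        {
            _gate.Release();
        }
    }
}
Both the polling task and the user's command handlers then go through ExchangeAsync, so they cannot interleave on the wire and neither blocks the UI thread.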
Another approach is to implement something similar to the queued background tasks in modern ASP.NET Core. The idea in a nutshell is quite simple: you create a dedicated Task (i.e. a thread; see TaskCreationOptions.LongRunning) which infinitely monitors a concurrent queue (e.g. a ConcurrentQueue<Func<Task>> or ConcurrentQueue<Func<YourConnectionObjectType, Task>>), synchronously processes queued items one at a time, and you add to this queue from other places in the app.
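A rough sketch of that approach, wrapping the ConcurrentQueue in a BlockingCollection so the consumer can block when the queue is empty (the work items are assumed to be Func<Task> delegates, one per send/receive exchange):
// One dedicated thread drains the queue; producers add work from anywhere.
var queue = new BlockingCollection<Func<Task>>(new ConcurrentQueue<Func<Task>>());

Task.Factory.StartNew(() =>
{
    // Blocks when the queue is empty; items run strictly one at a time.
    foreach (Func<Task> work in queue.GetConsumingEnumerable())
    {
        work().GetAwaiter().GetResult();
    }
}, TaskCreationOptions.LongRunning);

// Elsewhere in the app:
queue.Add(async () => { /* one send/receive exchange with the device */ });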

C# TCP Client in new thread?

I have a TcpListener listening for incoming connections, and now I basically want to ask whether it is better to process the client communication in the same thread or to start a new one - whether there is a best practice.
I intentionally didn't add the try-catch blocks and other handling to keep the question simple and clear.
Method 1:
while (true)
{
    TcpClient client = listener.AcceptTcpClient();
    processData(client);
}
Method 2:
while (true)
{
    TcpClient client = listener.AcceptTcpClient();
    new Thread(() => processData(client)).Start();
}
Method 3:
while (true)
{
    TcpClient client = listener.AcceptTcpClient();
    Thread t = new Thread(() => processData(client));
    t.Start();
    t.Join();
}
The code was written like Method 1 before, but processData randomly threw ThreadAbortExceptions, which shut down the entire server thread (probably because of some timeout with the client; I could not pinpoint the source of the exception, as the code runs on the .NET Compact Framework on an Embedded Compact 2013 machine).
I basically want to ask whether it is better to process the client communication in the same thread or to start a new one - whether there is a best practice.
When you use one thread per socket, you can block the I/O channels whenever the thread handling the socket is paused. And your first case is better than the third, because the program doesn't need to switch threads (Method 3 starts a thread only to immediately Join it, so it behaves like Method 1 with extra overhead).
So, in my experience, the best way to process multiple sockets is to use a single thread and its asynchronous methods. It is faster and safer than one thread per socket. With this technique you optimize machine resources and avoid having threads compete for them.
Check my answer to see how to use asynchronous methods with the socket. In that case I served 15k socket requests per hour and processor usage did not exceed 1%.
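As a rough sketch of that single-threaded asynchronous style, here is a Begin/End accept-and-read loop (the method names are from the desktop framework's TcpListener and NetworkStream, so the Compact Framework surface may differ, and ClientState is a hypothetical helper):
class ClientState
{
    public TcpClient Client;
    public NetworkStream Stream;
    public byte[] Buffer = new byte[4096];
}

void StartAccepting(TcpListener listener)
{
    listener.Start();
    listener.BeginAcceptTcpClient(OnAccept, listener);
}

void OnAccept(IAsyncResult ar)
{
    TcpListener listener = (TcpListener)ar.AsyncState;
    TcpClient client = listener.EndAcceptTcpClient(ar);
    listener.BeginAcceptTcpClient(OnAccept, listener); // keep accepting

    var state = new ClientState { Client = client, Stream = client.GetStream() };
    state.Stream.BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
}

void OnRead(IAsyncResult ar)
{
    ClientState state = (ClientState)ar.AsyncState;
    int read = state.Stream.EndRead(ar);
    if (read > 0)
    {
        // processData equivalent on state.Buffer[0..read), then keep reading.
        state.Stream.BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
    }
    else
    {
        state.Client.Close(); // remote side closed the connection
    }
}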

How to implement an IIS-like threadpool on a worker-server

EDIT I realised my question was not stated clearly enough and have edited it heavily.
This is a bit of an open ended question so apologies in advance.
In a nutshell, I want to implement IIS-style asynchronous request processing in an Azure worker role.
It may be very simple or it may be insanely hard - I am looking for pointers to where to research.
While my implementation will use Azure Workers and Service Bus Queues, the general principle is applicable to any scenario where a worker process is listening for incoming requests and then servicing them.
What IIS does
In IIS there is a fixed-size threadpool. If you deal with all requests synchronously, then the maximum number of requests you can deal with in parallel == maxthreads. However, if you have to do slow external I/O to serve requests, then this is highly inefficient, because you can end up with the server being idle yet have all threads tied up waiting for external I/O to complete.
From MSDN:
On the Web server, the .NET Framework maintains a pool of threads that are used to service ASP.NET requests. When a request arrives, a thread from the pool is dispatched to process that request. If the request is processed synchronously, the thread that processes the request is blocked while the request is being processed, and that thread cannot service another request.
This might not be a problem, because the thread pool can be made large enough to accommodate many blocked threads. However, the number of threads in the thread pool is limited. In large applications that process multiple simultaneous long-running requests, all available threads might be blocked. This condition is known as thread starvation. When this condition is reached, the Web server queues requests. If the request queue becomes full, the Web server rejects requests with an HTTP 503 status (Server Too Busy).
In order to overcome this issue, IIS has some clever logic that allows you to deal with requests asynchronously:
When an asynchronous action is invoked, the following steps occur:
The Web server gets a thread from the thread pool (the worker thread) and schedules it to handle an incoming request. This worker thread initiates an asynchronous operation.
The worker thread is returned to the thread pool to service another Web request.
When the asynchronous operation is complete, it notifies ASP.NET.
The Web server gets a worker thread from the thread pool (which might be a different thread from the thread that started the asynchronous operation) to process the remainder of the request, including rendering the response.
The important point here is that when the asynchronous operation completes, the continuation is scheduled to run on the same pool of threads that serves the initial incoming requests. This means that the system limits how much work it does concurrently, and this is what I would like to replicate.
What I want to do
I want to create a Worker role which will listen for incoming work requests on Azure Service Bus Queues and also potentially on TCP sockets. Like IIS, I want to have a maximum threadpool size, and I want to limit how much actual work the worker is doing in parallel; if the worker is busy serving existing requests - whether new incoming ones or callbacks from previous async calls - I don't want to pick up any new incoming requests until some threads have been freed up.
It is not a problem to limit how many jobs I start concurrently - that is easy to control; the hard part is limiting how many I am actually working on concurrently.
Let's assume a threadpool of 100 threads.
Say I get 100 requests to send an email, and each email takes 5 seconds to send to the SMTP server. If I limit my server to only processing 100 requests at the same time, then my server will be unable to do anything else for 5 seconds while the CPU is completely idle. So I don't really mind starting to send 1,000 or 10,000 emails at the same time, because 99% of the "request processing time" will be spent waiting for external I/O and my server will still be very quiet.
So that particular scenario I could deal with by just keeping on accepting incoming requests with no limit (or only limiting the start of the request until I fire off the async call; as soon as BeginSend is called, I return and start serving another request).
Now, imagine instead that I have a type of request that goes to the database to read some data, does some heavy calculation on it, and then writes the result back to the database. The two database calls should be made asynchronous, but 90% of the request processing time will be spent on my worker. So if I follow the same logic as above - keep starting async calls and just let the continuations grab whatever thread they need to continue on - then I will end up with a server that is very overloaded.
Somehow, what IIS does is make sure that when an async call returns, the continuation uses the same fixed-size thread pool. This means that if I fire off a lot of async calls, and they then return and start using my threads, IIS will not accept new requests until those continuations have finished. And that is perfect, because it ensures a sensible load on the server, especially when I have multiple load-balanced servers and a queue system that the servers pick work from.
I have this sneaky suspicion that this might be very simple to do, there is just something basic I am missing. Or maybe it is insanely hard.
Creating a threadpool should be considered independent of Windows Azure. Since a Worker Role instance is effectively Windows Server 2008 R2 (or SP2), there's nothing really different. You'd just need to set things up from your OnStart() or Run().
One thing you wanted to do was use queue length as a determining factor when scaling to more/fewer worker instances. Note that Service Bus Queues don't advertise queue length, whereas Windows Azure Queues (based on Storage, vs. Service Bus) do. With Windows Azure Queues, you'll need to poll synchronously for messages (whereas Service Bus Queues offer long-polling receive operations). It's probably a good idea to review the differences between Service Bus Queues and Windows Azure Queues, here.
Have you considered having a dedicated WCF instance (not WAS- or IIS-hosted) to buffer the long-running requests? It will have its own dedicated app pool, with a separate Max value setting from IIS that won't contend with your ASP.NET HTTP requests (which are served by the ASP.NET thread pool).
Then use the IIS async methods to call WCF with the constrained app pool.
I've used the SmartThreadPool project in the past as a per-instance pool and, if I'm reading you correctly, it should have all the callback and worker-limiting functionality you need. My company actually has it running currently on Azure for the exact purpose you describe of reading message bus requests asynchronously.
I have been digging around in this and found that it is indeed relatively easy.
http://www.albahari.com/threading/ has got some good information and I actually ended up buying the book which that website is essentially promoting.
What I found out is that:
Your application has a ThreadPool available to it by default
You can limit the number of threads available in the ThreadPool
When you use QueueUserWorkItem or Task.Factory.StartNew, the job you start runs on a thread in the ThreadPool
When you use one of the asynchronous IO calls in the framework (Begin... methods, WebClient.DownloadStringAsync, etc.), the callbacks will also run on a thread from the ThreadPool (what happens with the IO request itself is outside the scope of this discussion).
So far, so good. The problem is that I can keep calling Task.Factory.StartNew as much as I like, and the ThreadPool will simply queue up the work until there are free threads to service it. So, in the case of an Azure Worker, I could easily empty the queue even though my worker is busy servicing existing requests (and callbacks from existing requests). That is the core of my problem: I want to not take anything out of the queue until I actually have some free threads to service the request.
This is a very simple example of how this could be achieved. In essence, I am using an AutoResetEvent to make sure that I don't start another task from the queue until the previous task has actually started. Granted, I do actually take stuff out of the queue before there is a free thread, but on balance this should avoid crazy overloads of the worker and allow me to spin up more workers to share the load.
ThreadPool.SetMaxThreads(5, 1000); // Limit to 5 concurrent worker threads
ThreadPool.SetMinThreads(5, 10);   // Ensure we spin up all threads

var jobStart = new AutoResetEvent(true);

// The "listen" loop
while (true)
{
    var job = this.jobQueue.Dequeue();
    jobStart.WaitOne(); // Wait until the previous job has actually been started
    Task.Factory.StartNew(
        () =>
        {
            jobStart.Set(); // Will happen when the threadpool allocates this job to a thread
            this.Download(job);
        });
}
This can - and probably should - be made a lot more sophisticated, including having timeouts, putting the work item back in the queue if a thread can't be allocated within a reasonable time and so on.
An alternative would be to use ThreadPool.GetAvailableThreads to check if there are free threads before starting to listen to the queue, but that feels rather more error-prone.
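For completeness, a sketch of that alternative, reusing jobQueue and Download from the snippet above (the check is inherently racy, since pool threads can be taken between the check and the dispatch, which is exactly what makes it error-prone):
while (true)
{
    int workerThreads, completionPortThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);

    if (workerThreads == 0)
    {
        Thread.Sleep(100); // all threads busy; don't take work off the queue yet
        continue;
    }

    var job = this.jobQueue.Dequeue();
    Task.Factory.StartNew(() => this.Download(job));
}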
Somehow, what IIS does is make sure that when an async call returns it uses the same fixed-size thread pool.
This is not true: when your code runs in response to an HTTP request, you decide on what threads the continuation executes. Usually, this is the thread pool, and the thread pool is an AppDomain-wide resource that is shared among all requests.
I think IIS does less "magic" than you think it does. All it does is limit the number of parallel HTTP requests and the backlog size. You decide what happens once you have been given control by ASP.NET.
If your code is not protected against overloading the server, you will overload the server even on IIS.
From what I understand, you want to constrain the number of threads used for processing a certain type of message at the same time.
One approach would be to simply wrap the message processor, invoked on a new thread, with something like:
try
{
    Interlocked.Increment(ref count);
    Process(message);
}
finally
{
    Interlocked.Decrement(ref count);
}
Before invoking the wrapper, simply check whether count is less than your threshold count, and stop polling/handling more messages until the count is sufficiently lower.
EDIT Added more information based on comment
Frans, I am not sure why you see the infrastructure and business code as being coupled. Once you place your business process on a new thread to run asynchronously as a task, you need not worry about performing additional I/O-bound calls asynchronously. This is a simpler model to program in.
Here is what I am thinking.
// semi-pseudo-code

// Infrastructure - reads messages from the queue
// (independent thread, could be triggered by a timer)
while (count < maxCount && (message = Queue.GetMessage()) != null)
{
    Interlocked.Increment(ref count);
    // process message asynchronously on a new thread
    Task.Factory.StartNew(() => ProcessWrapper(message));
}

// glue / semi-infrastructure - deals with message deletion and exceptions
void ProcessWrapper(Message message)
{
    try
    {
        Process(message);
        Queue.DeleteMessage(message);
    }
    catch (Exception ex)
    {
        // Handle the exception here:
        // log it, write to a poison message queue, etc.
    }
    finally
    {
        Interlocked.Decrement(ref count);
    }
}

// business process
void Process(Message message)
{
    // actual work done here
}

Sequential access to asynchronous sockets

I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients.
The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.
The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.
I do not want to block threads from sending requests to different clients at the same time.
E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.
I was thinking of using a simple lock statement on the socket:
lock (c3Socket)
{
    // Send request to C3
    // Get response from C3
}
I am using asynchronous sockets, so I may have to use Monitor instead:
Monitor.Enter(c3Socket); // Before calling .BeginReceive()
And
Monitor.Exit(c3Socket); // In .EndReceive
I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for.
Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
Am I overlooking anything here? Any input appreciated.
My answer here would be a state machine per socket. The states would be free and busy:
If the socket is free, the sender thread marks it busy and starts sending to the client, then waits for the response.
You might want to set up a timeout on that wait, just in case a client gets stuck somehow.
If the state is busy, the thread sleeps, waiting for a signal.
When that client-related timeout expires, close the socket; the client is dead.
When a response is successfully received/parsed, mark the socket free again and signal/wake up the waiting threads.
Only lock around socket state inquiry and manipulation, not the actual network I/O. That means a lock per socket, plus some sort of wait primitive like a condition variable (in .NET, Monitor.Wait/Monitor.Pulse can play that role).
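A minimal sketch of such a per-socket gate, using Monitor.Wait/Monitor.Pulse as the wait primitive (SocketGate and its members are hypothetical names; the actual Begin/End network calls stay outside the lock):
// Hypothetical per-socket gate: a sender marks the socket busy before a
// request/response exchange and frees it afterwards.
class SocketGate
{
    private readonly object _lock = new object();
    private bool _busy;

    public bool TryAcquire(TimeSpan timeout)
    {
        lock (_lock)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (_busy)
            {
                TimeSpan remaining = deadline - DateTime.UtcNow;
                if (remaining <= TimeSpan.Zero || !Monitor.Wait(_lock, remaining))
                    return false; // timed out; the caller may decide the client is dead
            }
            _busy = true;
            return true;
        }
    }

    public void Release()
    {
        lock (_lock)
        {
            _busy = false;
            Monitor.Pulse(_lock); // wake one waiting sender
        }
    }
}
A heartbeat thread can then use TryAcquire with a short timeout to detect and evict stuck sockets, much as the question suggests.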
Hope this helps.
You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know what thread your operations will be running on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other thread may call Enter and succeed even though that client is "in use", just because it happened to get the same thread that Enter was originally called on.
I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use or not. You would of course need locking to update this shared state.

Create multiple TCP Connections in C# then wait for data

I am currently creating a Windows Service that will create TCP connections to multiple machines (the same port on all machines) and then listen for 'events' from those machines. I am attempting to write the code to create a connection, then spawn a thread that listens to the connection waiting for packets from the machine, then decode the packets that come through and call a function depending on the payload of the packet.
The problem is I'm not entirely sure how to do that in C#. Does anyone have any helpful suggestions or links that might help me do this?
Thanks in advance for any help!
Depending on how many concurrent clients you plan on supporting, a thread-per-connection architecture will probably break down very quickly. The reason is that each thread requires significant resources: by default each .NET thread gets 1MB of stack space, so that's 1MB per connection plus any overhead.
Instead, when supporting multiple connected clients, you will typically use the asynchronous methods (see here also), which are very efficient because Windows uses I/O completion ports. These basically free up the thread to do other things while waiting for some event to complete.
For this you would look at methods such as BeginAccept, BeginReceive, BeginSend, etc.
A simpler approach which also avoids making blocking calls and avoids multiple threads is to use the Socket.Select method in a loop. This allows a single thread to service multiple sockets. The thread can only physically read or write to a single socket at a time but the idea is that you are checking the state of multiple sockets which may or may not contain data to read.
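A minimal sketch of that pattern, assuming a clients list that holds all currently connected sockets (the decode/dispatch step is a placeholder):
// Single thread servicing many sockets with Socket.Select.
List<Socket> clients = new List<Socket>();
byte[] buffer = new byte[4096];

while (true)
{
    if (clients.Count == 0) { Thread.Sleep(50); continue; }

    // Select trims the list down to the sockets with pending data,
    // so pass a fresh copy on every iteration.
    List<Socket> readable = new List<Socket>(clients);
    Socket.Select(readable, null, null, 1000 * 1000); // timeout in microseconds

    foreach (Socket s in readable)
    {
        int read = s.Receive(buffer);
        if (read == 0)
            clients.Remove(s); // remote side closed
        // else: decode buffer[0..read) and dispatch to a handler
    }
}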
In any case, the thread-per-connection approach is much simpler to get your head around at first, but it does have significant scalability problems. I would suggest doing that first with the synchronous methods like Accept, Receive, Send, etc., then later refactoring your code to use the asynchronous methods so that you don't exhaust the server's memory.
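As a starting point, the synchronous thread-per-connection version might look like the sketch below (the port number and the handler body are placeholders):
// Simple synchronous thread-per-connection server.
TcpListener listener = new TcpListener(IPAddress.Any, 9000); // example port
listener.Start();

while (true)
{
    TcpClient client = listener.AcceptTcpClient(); // blocks until a machine connects
    Thread worker = new Thread(() =>
    {
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Decode the packet in buffer[0..read) and call the matching handler.
            }
        }
        client.Close();
    });
    worker.IsBackground = true;
    worker.Start();
}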
You can use an asynchronous receive for every socket connection and decode the data coming from the other machines to perform your tasks (you can find some useful information about asynchronous methods here: http://msdn.microsoft.com/en-us/library/2e08f6yc.aspx).
To create a connection, you can do:
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
sock.Connect(new IPEndPoint(address, port));
In the same way you can create multiple connections and keep their references in a List or Dictionary (whichever you prefer).
For receiving data asynchronously on a socket, you can do:
sock.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, new AsyncCallback(OnDataReceived), null);
The method OnDataReceived will be called when the receive operation completes.
In the OnDataReceived method you have to do:
void OnDataReceived(IAsyncResult result)
{
    int dataReceived = sock.EndReceive(result);
    // Make sure that dataReceived equals the amount of data you wanted to receive.
    // If it is less than you wanted, you can do a synchronous receive to read the remaining data.
    // When all of the data is received, call BeginReceive again to receive more data.

    // ... Do your decoding and call the method ... //
}
I hope this helps you.
Have a single thread that runs the accept() to pick up new connections. For each new connection you get, spawn a worker thread using the thread pool.
I don't know if it's possible in your situation, but have you thought about using a WCF service that gets called by the multiple machines? You can host this in a custom Windows service or IIS. It will consume very little resources while waiting for events, and it's much simpler to code than all that low-level scary socket stuff. It's automatically async, you get nice messages to your service rather than a packet you need to deserialize and/or parse, and you can use any number of protocols such as REST or binary.
You will of course need to create the process on the other end that sends the messages.
Just a thought...
Cheers
