C#/.NET application design with multi-threading/tasks - c#

I want to design an application that provides some bi-directional communication between two otherwise completely separate systems (a bridge).
One of them can call my application via web services - the other one is a piece of 3rd party hardware. It speaks RS232. Using an RS232<->ETH transceiver we manage to talk to the piece of hardware over TCP.
The program has the following requirements.
There is a main thread running the "management service". This might be a WCF endpoint or a self-hosted webapi REST service for example. It provides methods to start new worker instances or get a list of all worker instances and their respective states.
There are numerous "worker" threads. Each of them has a state model with 5 states.
In one state for example a TCP listener must be spawned to accept incoming connections from a connected hardware device (socket based programming is mandatory). Once it gets the desired information it sends back a response and transitions into the next state.
It should be possible from the main (manager) thread to (gracefully) end single worker threads (for example if a worker thread is stuck in a state from which it cannot recover).
This is where I am coming from:
I considered WCF workflow services (state model activity) however I wasn't sure how to spawn a TcpListener there - and keep it alive. I do not need any workflow "suspend/serialize and resume/deserialize" like behavior.
The main thread is probably not that much of a concern - it just has to be there and running. It's the child (background) threads and their internal state machine that worry me.
I tried to wrap my mind around how Tasks might help here, but I ended up thinking that threads are actually a better fit for the job.
Since there has been a lot of development in .NET (4+) I am not sure which approach to follow... the internet is full of 2005 to 2010 examples which are probably more than just outdated. It is very difficult to separate the DOs from the DONTs.
I'm glad for any hints.
UPDATE: Okay I'll try to clarify what my question is...
I think the easiest way is to provide some pseudo code.
public static void Main()
{
    // Start self-hosted WCF service (due to backwards compatibility, otherwise I'd go with Katana/OWIN) on a worker thread
    StartManagementHeadAsBackgroundThread();

    // Stay alive forever
    while (running)
    {
        // not sure what to put here. Maybe Thread.Sleep(500)?
        Thread.Sleep(500);
    }

    // Ok, application is shutting down => somehow "running" is not true anymore.
    // One possible reason might be: the management service's "Shutdown()" method has been called,
    // or the Windows service is being stopped...
    WaitForAllChildrenToReachFinalState();
}
private static void StartManagementHeadAsBackgroundThread()
{
    ThreadStart ts = new ThreadStart(...);
    Thread t = new Thread(ts);
    t.Start();
}
The management head (= WCF service) offers a few methods:
StartCommunicator() to start new worker threads doing the actual work with 5 states
Shutdown() to shut down the whole application, letting all worker threads finish gracefully (usually a question of minutes)
GetAllCommunicatorInstances() to show a summary of all worker threads and the current state they are in.
DestroyCommunicatorInstance(port) to forcefully end a worker thread - for example if a communicator is stuck in a state from which it cannot recover.
Anyway I need to spawn new background threads from the "management" service (StartCommunicator method).
public class Communicator
{
    private MyStateEnum _state;

    public Communicator(int port)
    {
        _state = MyStateEnum.Initializing;
        // do something
        _state = MyStateEnum.Ready;
    }

    public void Run()
    {
        // again a while(true) loop?!
        while (true)
        {
            switch (_state)
            {
                case MyStateEnum.Ready:
                    // start TcpListener - wait for TCP packets to arrive.
                    // parse packets. If "OK" set next good state. Otherwise set error state.
                    break;
            }

            if (_state == MyStateEnum.Error) Stop();
            if (_state == MyStateEnum.Done || _state == MyStateEnum.Error) break;
        }
    }

    public void Stop()
    {
        // some cleanup.. disposes maybe. Not sure yet.
    }
}
public enum MyStateEnum
{
    Initializing, Ready, WaitForDataFromDevice, SendingDataElsewhere, Done, Error
}
So the question is whether my approach will take me anywhere or if I'm completely on the wrong track.
How do I implement this best? Threads? Tasks? Is while(true) a valid thing to do? How do I interact with the communicator instances from within my "management service"? What I am looking for is an annotated boilerplate kind of solution :)

My suggestion would be to use an ASP.NET Web API service and mark the controller actions as async. From that point on, use Tasks as much as possible, and when you end up with blocking IO the HTTP server will not be blocked. Personally, I would avoid Threads until you are absolutely sure that you can't achieve the same thing with Tasks.
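A minimal sketch of what that might look like; the CommunicatorManager type and its members are illustrative placeholders for whatever actually owns the workers, not an existing API:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Http;   // ASP.NET Web API 2

// Hypothetical manager that owns the workers; only state tracking is shown here.
public class CommunicatorManager
{
    private readonly ConcurrentDictionary<int, string> _states =
        new ConcurrentDictionary<int, string>();

    public Task StartAsync(int port)
    {
        _states[port] = "Ready";
        // A real implementation would start the communicator's state machine here.
        return Task.FromResult(0);
    }

    public object GetStates()
    {
        return _states;
    }
}

public class CommunicatorController : ApiController
{
    private static readonly CommunicatorManager Manager = new CommunicatorManager();

    // POST api/communicator?port=5000 - start a new worker
    [HttpPost]
    public async Task<IHttpActionResult> Start(int port)
    {
        await Manager.StartAsync(port);   // the request thread is free while this awaits
        return Ok();
    }

    // GET api/communicator - list all workers and their current states
    [HttpGet]
    public IHttpActionResult GetAll()
    {
        return Ok(Manager.GetStates());
    }
}

The static manager is just the simplest way to keep worker state across requests in a sketch; in a real application you would probably register it as a singleton with your DI container.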

I would recommend looking at using a thread pool. This will help with managing resources and make for more efficient use of them.
As far as terminating threads goes: thread pool threads are background workers and will be terminated when your service stops; however, from your description above, that is not sufficient. Your threads should always have the ability to receive a message asking them to terminate.
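One way to give each worker such a "please terminate" message is a CancellationTokenSource per worker, kept by the manager. A rough sketch (the class and method names are illustrative only):

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class WorkerManager
{
    // One cancellation source per worker, keyed by port (illustrative).
    private readonly ConcurrentDictionary<int, CancellationTokenSource> _workers =
        new ConcurrentDictionary<int, CancellationTokenSource>();

    public void StartWorker(int port)
    {
        var cts = new CancellationTokenSource();
        if (_workers.TryAdd(port, cts))
        {
            // Runs on a thread-pool thread, in line with the suggestion above.
            Task.Run(() => WorkerLoop(port, cts.Token));
        }
    }

    public void StopWorker(int port)
    {
        CancellationTokenSource cts;
        if (_workers.TryRemove(port, out cts))
        {
            cts.Cancel();   // the "please terminate" message
        }
    }

    private static void WorkerLoop(int port, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Do one unit of work, checking the token between steps.
        }
        // Clean up sockets etc. here before the thread returns to the pool.
    }
}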

Related

Running socket.EndReceive(ar)/BeginReceive in different thread in C#

I am writing a server/client socket app in C# on Windows 10 platform.
The server side code and GUI code are running in the same process. On the server side I am trying to optimize my code as I can have 255 socket client connections. I have followed Microsoft's “Asynchronous Server Socket Example”. I have moved (from their example) all the logic of
“public static void ReadCallback(IAsyncResult ar)” to:
Task.Run(() =>
{
    ReadCallback(ar);
});
..., hoping that “ReadCallback” will be called from a different thread, therefore releasing the socket callback thread ASAP.
Is that going to create a problem, since I now have ‘socket.EndReceive(ar)’ and ‘socket.BeginReceive(…)’ possibly called from different threads?
The code is still working but I don’t know whether it is by accident or by my design. Please comment on that.
According to .Net documentation all Socket calls are threadsafe.
Thread Safety
Instances of this class are thread safe.
I have written other code with Begin.. and End.. in different threads. However, in hindsight, it wasn't the best way to do it.
I would suggest a Task per TcpClient connection, handling both the in and the out. If you really don't need any synchronization between the In and the Out, then that Task could spawn an In Task and an Out Task but still be under the control of the TcpClient Task.
For high performance IO, I would suggest checking out System.IO.Pipelines
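A rough sketch of the Task-per-TcpClient idea, assuming a simple accept loop and one handler Task that owns both reading and writing for its connection (the echo logic is just a placeholder):

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class TcpAcceptLoop
{
    public static async Task RunAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            // One Task owns this connection's reads and writes (fire-and-forget;
            // assigned to a local only to make that explicit).
            var ignored = Task.Run(() => HandleClientAsync(client));
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (NetworkStream stream = client.GetStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Echo back as a placeholder for real parsing/response logic.
                await stream.WriteAsync(buffer, 0, read);
            }
        }
    }
}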

Gracefully handling shutdown of a Windows service

Assume that you have a multi-threaded Windows service which performs lots of different operations that take a fair amount of time, e.g. extracting data from different data stores, parsing said data, posting it to an external server etc. Operations may be performed in different layers, e.g. the application layer, repository layer or service layer.
At some point in the lifespan of this Windows service you may wish to shut it down or restart it by way of services.msc; however, if you can't stop all operations and terminate all threads in the Windows service within the timespan that services.msc expects the stop procedure to take, it will hang and you will have to kill it from Task Manager.
Because of the issue mentioned above, my question is as follows: How would you implement a fail-safe way of handling shutdown of your Windows service? I have a volatile boolean that acts as a shutdown signal, enabled by OnStop() in my service base class, and it should gracefully stop my main loop, but that isn't worth anything if there is an operation in some other layer which is taking its time doing whatever that operation is up to.
How should this be handled? I'm currently at a loss and need some creative input.
I would use a CancellationTokenSource and propagate the cancellation token from the OnStop method down to all layers and all threads and tasks started there. It's in the framework, so it will not break your loose coupling if you care about that (wherever you use a thread/Task, you also have CancellationToken available).
This means you need to adjust your async methods to take the cancellation token into consideration.
You should also be aware of ServiceBase.RequestAdditionalTime. In case it is not possible to cancel all tasks in due time, you can request an extension period.
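A condensed sketch of how these two pieces could fit together, assuming the real work lives in a DoWorkAsync method (a hypothetical name used only for illustration):

using System;
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;

public class MyService : ServiceBase
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Task _work;

    protected override void OnStart(string[] args)
    {
        // Pass the token down through every layer that starts threads or tasks.
        _work = Task.Run(() => DoWorkAsync(_cts.Token));
    }

    protected override void OnStop()
    {
        RequestAdditionalTime(30000);   // ask the SCM for 30 more seconds if needed
        _cts.Cancel();
        try { _work.Wait(TimeSpan.FromSeconds(25)); }
        catch (AggregateException) { /* cancellation surfaces here; nothing else to do */ }
    }

    // Hypothetical worker method used only for illustration.
    private static async Task DoWorkAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // one unit of work...
            await Task.Delay(TimeSpan.FromSeconds(1), token);
        }
    }
}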
Alternatively, you could explore the IsBackground property. All threads in your Windows service that have it enabled are stopped by the CLR when the process is about to exit:
A thread is either a background thread or a foreground thread.
Background threads are identical to foreground threads, except that
background threads do not prevent a process from terminating. Once all
foreground threads belonging to a process have terminated, the common
language runtime ends the process. Any remaining background threads
are stopped and do not complete.
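A tiny, self-contained illustration of that behavior (purely an example, not taken from the question):

using System;
using System.Threading;

class BackgroundThreadExample
{
    static void Main()
    {
        // The worker never finishes on its own, but because IsBackground is set
        // the CLR stops it when the last foreground thread (Main) exits.
        var worker = new Thread(() => Thread.Sleep(Timeout.Infinite)) { IsBackground = true };
        worker.Start();
        Console.WriteLine("Main is done; the background thread does not keep the process alive.");
    }
}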
After more research and some brainstorming I came to realise that the problems I've been experiencing were being caused by a very common design flaw regarding threads in Windows services.
The design flaw
Imagine you have a thread which does all your work. Your work consists of tasks that should be run again and again indefinitely. This is quite often implemented as follows:
volatile bool keepRunning = true;
Thread workerThread;

protected override void OnStart(string[] args)
{
    workerThread = new Thread(() =>
    {
        while (keepRunning)
        {
            DoWork();
            Thread.Sleep(10 * 60 * 1000); // Sleep for ten minutes
        }
    });
    workerThread.Start();
}

protected override void OnStop()
{
    keepRunning = false;
    workerThread.Join();
    // Ended gracefully
}
This is the very common design flaw I mentioned. The problem is that while this will compile and run as expected, you will eventually find that your Windows service won't respond to commands from the service console in Windows. This is because Thread.Sleep() blocks the worker thread, so the workerThread.Join() call in OnStop() cannot return until the sleep finishes, and the service appears unresponsive. You will only experience this error if the thread blocks for longer than the timeout configured by Windows in HKLM\SYSTEM\CurrentControlSet\Control\WaitToKillServiceTimeout; because of this registry value, this implementation may work for you if your thread is configured to sleep for a very short period of time and does its work in an acceptable amount of time.
The alternative
Instead of using Thread.Sleep() I decided to go for ManualResetEvent and System.Threading.Timer instead. The implementation looks something like this:
OnStart:

    this._workerTimer = new Timer(new TimerCallback(this._worker.DoWork));
    this._workerTimer.Change(0, Timeout.Infinite); // This tells the timer to perform the callback right now

Callback:

    if (MyServiceBase.ShutdownEvent.WaitOne(0)) // My static ManualResetEvent
        return; // Exit callback

    // Perform lots of work here
    ThisMethodDoesAnEnormousAmountOfWork();

    // This tells the timer to execute the callback again after a specified period of time.
    // This is the amount of time that was previously passed to Thread.Sleep()
    (stateInfo as Timer).Change(_waitForSeconds * 1000, Timeout.Infinite);

OnStop:

    MyServiceBase.ShutdownEvent.Set(); // This signals the callback to never ever perform any work again
    this._workerTimer.Dispose(); // Dispose of the timer so that the callback is never ever called again
The conclusion
By implementing System.Threading.Timer and ManualResetEvent you will avoid your service becoming unresponsive to service console commands as a result of Thread.Sleep() blocking.
PS! You may not be out of the woods just yet!
However, I believe there are cases in which a callback is assigned so much work by the programmer that the service may become unresponsive to service console commands during workload execution. If that happens you may wish to look at alternative solutions, like checking your ManualResetEvent deeper in your code, or perhaps implementing CancellationTokenSource.

ASP.NET long running task. Thread is being aborted exception

An ASP.NET 3.5 webapp has to start several tasks that take hours to complete. For obvious reasons the pages which start these tasks cannot wait for them to finish, nor will anyone want to wait that long to get a response, so the tasks must be asynchronous.
There is a Helper class to handle all of these long running tasks. The main method that schedules and executes these tasks is currently the following:
public static bool ScheduleTask(TaskDescriptor task, Action action)
{
    bool notAlreadyRunning = TasksAsync.TryAdd(task);
    if (notAlreadyRunning)
    {
        Thread worker = null;
        worker = new Thread(() =>
        {
            try { action(); }
            catch (Exception e)
            {
                Log.LogException(e, "Worker");
            }
            TasksAsync.RemoveTask(task);
            workers.Remove(worker);
        });
        workers.Add(worker);
        worker.Start();
    }
    return notAlreadyRunning;
}
In earlier implementations we used the ThreadPool.QueueUserWorkItem approach, but the result has always been the same: after approx. 20-30 minutes a “Thread was being aborted” exception is thrown.
Does anyone know why is this happening? or how can it be prevented?
More Info:
IIS standard configuration.
Tasks could be anything: queries to a database and/or IO operations, etc.
UPDATE: Decisions
Thank you all for your responses. Now I don't know which answer to mark as accepted. All of them are valid and are possible solutions to this problem. I will wait until the end of the day and accept the answer with the most upvotes; in case of a draw I will choose the first shown answer (typically they are ordered by relevance).
For anyone who wants to know: the solution I chose, again due to time restrictions, was to change the IIS recycling configuration. But what I consider to be the ideal solution, based on my research and of course the answers below, is to create a "Worker Service" and use a communication mechanism between the ASP.NET app and the new "Worker Service" to coordinate the long-running work.
You can start the long-running process in its own application domain.
In the past, when I've needed this capability, I've created a Windows Service for this purpose. If you use WCF to connect to it, it doesn't even have to run on the IIS machine at all; you can run it on any machine on the network.
Chances are you can get this working by upping the timeout, using a different app pool, or a variety of other hacks, but your best bet is to decouple the long-running task from the UI and ASP.NET completely and use either a service (wouldn't recommend it) or a scheduled task that polls for work to do. Personally I would use something like AWS SQS/SNS to keep track of the work to be done and a scheduled task in Windows Server that checks for things to do at whatever frequency makes sense. The only thing the UI/ASP.NET then needs to do is log the fact that something needs to be done, not actually do it.
Another benefit of this message-based approach is that, should the long-running process become too long-running or overworked, you have the opportunity to add more worker tasks or servers to complete those requests.
Perhaps more than you can implement for your immediate problem, but something to consider for a better long-term solution.
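For illustration, the polling side of such a decoupled design could look roughly like this; FetchNextWorkItem and Process are hypothetical stand-ins for reading a pending item from your queue or table and doing the actual long-running job:

using System;
using System.Threading;

public static class WorkPoller
{
    public static void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            object item = FetchNextWorkItem();   // e.g. read a pending row/message from your queue
            if (item == null)
            {
                // Nothing to do; wait a bit (or until shutdown is requested) and poll again.
                token.WaitHandle.WaitOne(TimeSpan.FromSeconds(30));
                continue;
            }
            Process(item);   // hours-long work is fine here, outside of IIS
        }
    }

    private static object FetchNextWorkItem() { return null; }   // placeholder
    private static void Process(object item) { }                 // placeholder
}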

C# switch from single to multi thread

I have a C# HttpListener that runs on a single thread and parses data sent to it by another program. My main problem is that not all the data sent to the server is received. I can only assume this is due to the limitations of it being run on a single thread. I have searched high and low for a simple multi-threading solution so it may receive all the data sent to it, and came up empty handed. Any help in transforming this into a multi-threaded application would be much appreciated.
private void frmMain_Load(object sender, EventArgs e)
{
    Thread t = new Thread(new ThreadStart(ThreadProc));
    t.Start();
}

public static void ThreadProc()
{
    while (true)
    {
        WebBot.SimpleListenerExample(new string[] { "http://localhost:13274/" });
        //Thread t = new Thread(new ThreadStart(ThreadProc));
        //t.Start();
        Application.DoEvents();
    }
}
First things first: verify that your hypothesis is indeed correct. You need to check:
How much data is sent
How much data is received
How long does it take to send the data
How long does it take to operate on the data
HTTP works over TCP, which generally guarantees delivery, so even if it takes a long time, your server should be getting all the incoming information.
That said, if you still want to make the process multi-threaded, I would recommend the following design:
One thread like you have right now (LISTENER THREAD), that accepts incoming data.
Another set of threads that will process the incoming data (WORKER THREADS).
The listener thread will only receive the data and place it in a queue.
The worker threads will dequeue the queue and operate on the data (a minimal sketch appears after the notes below).
Several notes and things to think about, though:
Take care of thread synchronization - specifically, you need to protect the queue.
Think about whether it matters which worker thread gets the data. If there are several chunks that need to be taken care of by a specific worker thread, you'll need to address this problem.
In some cases, if there is a very high load on the listener thread, the queue may become a bottleneck, or more precisely - the locking on the queue may become a bottleneck. In this case I would recommend moving to a model of N queues for N worker threads, and have the listener just pick one in a round-robin fashion. This will minimize the locks, and actually, since you'll have one reader and one writer per queue, you can even get away without a lock (but this is out of scope for this answer).
Yet another option would be to use a thread pool. A thread pool is a pool of threads that are hibernating until they are needed. When the listener gets an incoming input it will allocate it to a free thread, or will enlarge the pool if needed; this way you don't have a queue, and your threads are optimally used.
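Going back to the queue-based design described above, here is a minimal sketch using BlockingCollection, which takes care of the locking mentioned in the notes (the class and method names are illustrative):

using System.Collections.Concurrent;
using System.Net;
using System.Threading;

public static class QueuedHttpServer
{
    private static readonly BlockingCollection<HttpListenerContext> Queue =
        new BlockingCollection<HttpListenerContext>();

    public static void Start(string prefix, int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            new Thread(WorkerLoop) { IsBackground = true }.Start();
        }

        var listener = new HttpListener();
        listener.Prefixes.Add(prefix);   // e.g. "http://localhost:13274/"
        listener.Start();
        while (true)
        {
            // The listener thread only accepts and enqueues; workers do the parsing.
            Queue.Add(listener.GetContext());
        }
    }

    private static void WorkerLoop()
    {
        foreach (HttpListenerContext context in Queue.GetConsumingEnumerable())
        {
            // Read context.Request, process the data, write context.Response here.
            context.Response.Close();
        }
    }
}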
Simplest Embedded Web Server Ever with HttpListener may help you get started.

Nonblocking Tcp server

It's not really a question, I'm just looking for some guidelines :)
I'm currently writing an abstract TCP server which should use as few threads as it can.
Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the client sockets.
So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show the place where I have my problem):
List<Socket> readSockets = new List<Socket>();
List<Socket> writeSockets = new List<Socket>();
List<Socket> errorSockets = new List<Socket>();

while (true)
{
    Socket.Select(readSockets, writeSockets, errorSockets, 10);
    foreach (Socket readSocket in readSockets)
    {
        // do reading here
    }
    foreach (Socket writeSocket in writeSockets)
    {
        // do writing here
    }
    // POINT2 and here's the problem I will describe below
}
It works smoothly except for 100% CPU utilization, because the while loop cycles over and over again. If my clients do a send->receive->disconnect routine it's not that painful, but if I try to keep the connection alive doing send->receive->send->receive over and over, it really eats up all the CPU. So my first idea was to put a sleep there: I check if all sockets have their data sent and then put a Thread.Sleep at POINT2 for just 10 ms, but this 10 ms later produces a huge delay when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15 ms, and with keep-alive it becomes worse by at least 10 ms :(
Maybe it's just poor architecture? What can be done so that my processor doesn't hit 100% utilization and my server reacts to data appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
Take a look at the TcpListener class first. It has a BeginAccept method that will not block, and will call one of your functions when someone connects.
Also take a look at the Socket class and its Begin methods. These work the same way. One of your functions (a callback function) is called whenever a certain event fires, then you get to handle that event. All the Begin methods are asynchronous, so they will not block and they shouldn't use 100% CPU either. Basically you want BeginReceive for reading and BeginSend for writing I believe.
You can find more on google by searching for these methods and async sockets tutorials. Here's how to implement a TCP client this way for example. It works basically the same way even for your server.
This way you don't need any infinite looping, it's all event-driven.
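A condensed sketch of that event-driven pattern, using TcpListener.BeginAcceptSocket and Socket.BeginReceive; error handling and shutdown are omitted, and the per-connection state class is just an illustration:

using System;
using System.Net;
using System.Net.Sockets;

public class AsyncTcpServer
{
    private readonly TcpListener _listener;

    public AsyncTcpServer(int port)
    {
        _listener = new TcpListener(IPAddress.Any, port);
    }

    public void Start()
    {
        _listener.Start();
        _listener.BeginAcceptSocket(OnAccept, null);   // returns immediately, no blocking loop needed
    }

    private void OnAccept(IAsyncResult ar)
    {
        Socket socket = _listener.EndAcceptSocket(ar);
        _listener.BeginAcceptSocket(OnAccept, null);   // keep accepting further clients
        var state = new ConnectionState { Socket = socket };
        socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
    }

    private void OnReceive(IAsyncResult ar)
    {
        var state = (ConnectionState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) { state.Socket.Close(); return; }   // client disconnected
        // Process state.Buffer[0..read) here, then post the next receive.
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
    }

    private class ConnectionState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[4096];
    }
}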
Are you creating a peer-to-peer application or a client-server application? You've got to consider how much data you are putting through the sockets as well.
Asynchronous BeginSend and BeginReceive are the way to go; you will need to implement the callbacks, but it's fast once you get it right.
You probably don't want to set your Send and Receive timeouts too high either, but there should be a timeout so that if nothing is received after a certain time, the call comes out of the block and you can handle it there.
Microsoft has a nice async TCP server example. It takes a bit to wrap your head around it. It was a few hours of my own time before I was able to create the basic TCP framework for my own program based on this example.
http://msdn.microsoft.com/en-us/library/fx6588te.aspx
The program logic goes kind of like this. There is one thread that calls listener.BeginAccept and then blocks on allDone.WaitOne. BeginAccept is an async call which gets offloaded to the thread pool and handled by the OS. When a new connection comes in, the OS calls the callback method passed in to BeginAccept. That method flips allDone to let the main listening thread know it can listen once again. The callback method is just a transitionary method and continues on to call yet another async call to receive data.
The callback method supplied, ReadCallback, is the primary work "loop" (effectively recursive async calls) for the async calls. I use the term "loop" loosely because each method call actually finishes, but not before calling the next async method. Effectively, you have a bunch of async calls all calling each other, and you pass around your "state" object. This object is your own object and you can do whatever you want with it.
Every callback method will only get two things returned when the OS calls your method:
1) Socket Object representing the connection
2) State object with which you use for your logic
With your state object and socket object, you can effectively handle your "connections" asynchronously. The OS is VERY good at this.
Also, because your main loop blocks waiting for a connection to come in and off-loads those connections to the thread pool via async calls, it remains idle most of the time. The thread pool for your sockets is handled by the OS via completion ports, so the threads don't do any real work until data comes in. Very little CPU is used and it's effectively threaded via the thread pool.
P.S. From what I understand, you don't want to do any hard work with these methods, just handling the movement of the data. Since the thread pool is the pool for your Network IO and is shared by other programs, you should offload any hard work via threads/tasks/async as to not cause the socket thread pool to get bogged down.
P.P.S. I haven't found a way of closing the listening connection other than just disposing "listener". Because the async BeginAccept call has been made, that method will never return until a connection comes in, which means I can't tell it to stop until it returns. I think I'll post a question on MSDN about it, and will link to it if I get a good response.
Everything is fine in your code except the timeout value. You set it to 10 microseconds (10×10^-6 s), so your while loop iterates very often. You should set an adequate value (10 seconds, for example) and your code will not eat 100% CPU.
List<Socket> readSockets = new List<Socket>();
List<Socket> writeSockets = new List<Socket>();
List<Socket> errorSockets = new List<Socket>();

while (true)
{
    Socket.Select(readSockets, writeSockets, errorSockets, 10 * 1000 * 1000); // 10 seconds, expressed in microseconds
    foreach (Socket readSocket in readSockets)
    {
        // do reading here
    }
    foreach (Socket writeSocket in writeSockets)
    {
        // do writing here
    }
}
