I need some help choosing the right tool. I'm replacing the hardware controller that controls some pumps with a Raspberry Pi and writing code for it in C# (.NET Core). The pumps should run in a specific sequence and for specified durations. Of all the possible ways to accomplish this, I'm looking for the cleanest and most interesting one.
The pumps should do the following:
Turn on pump 1
Wait 15 seconds
Turn on pump 2
Wait 10 minutes
Turn on pump 3
Let pump 3 run for 20 minutes
Turn off pump 3
Wait 10 minutes
Turn off pump 2
Wait 15 seconds
Turn off pump 1
I looked into timers, threads, tasks, and state machines, but I have a hard time picking the right tool for this job. At all times, I also need to be able to stop all pumps immediately.
Thanks for your help.
I'd probably go with tasks.
public async Task Execute()
{
await TurnOnPump1();
await Task.Delay(TimeSpan.FromSeconds(15));
await TurnOnPump2();
await Task.Delay(TimeSpan.FromMinutes(10));
await TurnOnPump3();
//And so on..
}
To expand on the great answer from Magnus, here's how you could implement cancellation so you could stop executing the method (stop starting new pumps) if you decide to stop all of them.
I posted this answer because OP specifically said that they need to be able to stop the pumps at all times, so Magnus' answer wouldn't quite work in certain scenarios.
At all times, I also need to be able to stop immediately all pumps.
public async Task StartAll(CancellationToken ct)
{
await TurnOnPump1(); // no ct here because these methods should take little to no time to execute
await Task.Delay(TimeSpan.FromSeconds(15), ct);
await TurnOnPump2();
await Task.Delay(TimeSpan.FromMinutes(10), ct);
await TurnOnPump3();
//And so on..
}
public async Task StopAll()
{
// Your_CancellationTokenSource should be defined somewhere else
Your_CancellationTokenSource.Cancel(); // this line makes Task.Delay throw a TaskCanceledException
await StopPump1();
await StopPump2();
await StopPump3();
// ..
}
public async Task HowToCallStart()
{
try
{
// Your_CancellationTokenSource should be defined somewhere else
await StartAll(Your_CancellationTokenSource.Token);
}
catch (TaskCanceledException)
{
// Starting was canceled
}
}
This way, StopAll can be called at any time while the pumps are starting and you don't get any issues.
A few things to mention:
Your_CancellationTokenSource should of course be some variable outside of these methods so it can be shared. It needs to be of type CancellationTokenSource (see the sketch after these notes).
As you can see from the comment (both in the code and in these notes), I assumed that starting a pump is very fast and takes little to no time. That is the reason I did not pass in my CancellationToken.
If turning on the pumps takes some time, consider using CancellationToken inside the TurnOnPumpX methods as well to abort if the operation was canceled. If you do so, you can simply pass in ct to those methods as well.
You should add some code in the catch for when the operation is canceled. At least print out a debug message if the end-user doesn't need to see it.
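For completeness, here is a minimal sketch of how that shared token source could live next to these methods; the class name and the ResetCancellation helper are purely illustrative and not part of the original code:
public class PumpController
{
    // Shared by StartAll, StopAll and HowToCallStart; this is the
    // "Your_CancellationTokenSource" referred to above.
    private CancellationTokenSource Your_CancellationTokenSource = new CancellationTokenSource();

    // If you want to be able to start the sequence again after a StopAll,
    // the source has to be replaced, because a cancelled source stays cancelled.
    private void ResetCancellation()
    {
        Your_CancellationTokenSource.Dispose();
        Your_CancellationTokenSource = new CancellationTokenSource();
    }
}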
I already have some experience in working with threads in Windows but most of that experience comes from using Win32 API functions in C/C++ applications. When it comes to .NET applications however, I am often not sure about how to properly deal with multithreading. There are threads, tasks, the TPL and all sorts of other things I can use for multithreading but I never know when to use which of those options.
I am currently working on a C# based Windows service which needs to periodically validate different groups of data from different data sources. Implementing the validation itself is not really an issue for me but I am unsure about how to handle all of the validations running simultaneously.
I need a solution for this which allows me to do all of the following things:
Run the validations at different (predefined) intervals.
Control all of the different validations from one place so I can pause and/or stop them if necessary, for example when a user stops or restarts the service.
Use the system resources as efficiently as possible to avoid performance issues.
So far I've only had one similar project before where I simply used Thread objects combined with a ManualResetEvent and a Thread.Join call with a timeout to notify the threads about when the service is stopped. The logic inside those threads to do something periodically then looked like this:
while (!shutdownEvent.WaitOne(0))
{
if (DateTime.Now > nextExecutionTime)
{
// Do something
nextExecutionTime = nextExecutionTime.AddMinutes(interval);
}
Thread.Sleep(1000);
}
While this did work as expected, I've often heard that using threads directly like this is considered "old-school" or even bad practice. I also think that this solution does not use threads very efficiently, as they are just sleeping most of the time. How can I achieve something like this in a more modern and efficient way?
If this question is too vague or opinion-based then please let me know and I will try my best to make it as specific as possible.
Question feels a bit broad but we can use the provided code and try to improve it.
Indeed, the problem with the existing code is that for the majority of the time it keeps a thread blocked while doing nothing useful (sleeping). The thread also wakes up every second only to check the interval and, in most cases, goes back to sleep because it's not validation time yet. Why does it do that? Because if you sleep for a longer period, you might block for a long time when you signal shutdownEvent and then join the thread. Thread.Sleep doesn't provide a way to be interrupted on request.
To solve both problems we can use:
The cooperative cancellation mechanism in the form of CancellationTokenSource + CancellationToken.
Task.Delay instead of Thread.Sleep.
For example:
async Task ValidationLoop(CancellationToken ct) {
while (!ct.IsCancellationRequested) {
try {
var now = DateTime.Now;
if (now >= _nextExecutionTime) {
// do something
_nextExecutionTime = _nextExecutionTime.AddMinutes(1);
}
var waitFor = _nextExecutionTime - now;
if (waitFor.Ticks > 0) {
await Task.Delay(waitFor, ct);
}
}
catch (OperationCanceledException) {
// expected, just exit
// otherwise, let it go and handle cancelled task
// at the caller of this method (returned task will be cancelled).
return;
}
catch (Exception) {
// either have global exception handler here
// or expect the task returned by this method to fail
// and handle this condition at the caller
}
}
}
Now we do not hold a thread any more, because await Task.Delay doesn't do this. Instead, after the specified time interval it will execute the subsequent code on a free thread pool thread (it's more complicated than this, but we won't go into details here).
We also don't need to wake up every second for no reason, because Task.Delay accepts a cancellation token as a parameter. When that token is signalled, Task.Delay is immediately interrupted with an exception, which we expect and use to break out of the validation loop.
To stop the provided loop you need to use CancellationTokenSource:
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
And you pass its _cts.Token into the provided method. Then when you want to signal the token, just do:
_cts.Cancel();
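Putting it together, a minimal sketch of how a service class might start and stop the loop (the class and field names here are illustrative; only ValidationLoop comes from the code above, and it is assumed to live in the same class):
public class ValidationService
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Task _validationTask;

    public void Start()
    {
        // Kick off the loop; the returned task completes when the loop exits.
        _validationTask = ValidationLoop(_cts.Token);
    }

    public async Task StopAsync()
    {
        _cts.Cancel();          // interrupts the pending Task.Delay
        await _validationTask;  // wait for the loop to observe cancellation and return
    }
}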
To further improve the resource management: if your validation code performs any IO operations (reads files from disk, network or database access, etc.), use the async versions of those operations. Then you also hold no unnecessary threads blocked while performing IO.
Now you don't need to manage threads yourself anymore; instead you operate in terms of the tasks you need to perform, letting the framework/OS manage threads for you.
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive and add using System.Reactive.Linq; - then you can do this:
Subject<bool> starter = new Subject<bool>();
IObservable<Unit> query =
starter
.StartWith(true)
.Select(x => x
? Observable.Interval(TimeSpan.FromSeconds(5.0)).SelectMany(y => Observable.Start(() => Validation()))
: Observable.Never<Unit>())
.Switch();
IDisposable subscription = query.Subscribe();
That fires off the Validation() method every 5.0 seconds.
When you need to pause and resume, do this:
starter.OnNext(false);
// Now paused
starter.OnNext(true);
// Now restarted.
When you want to stop it all call subscription.Dispose().
I have a method that sends a request to my server every second.
public async Task StartRequestProcess(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
var result = await GetDataFromServerAsync();
await Task.Delay(1000, stoppingToken);
}
}
But my GetDataFromServerAsync() method takes 10 or 15 seconds sometimes.
What happens during this time (10 or 15 seconds)?
Will the process wait until the long request completes? Or will a new request be sent every second without waiting?
I have a method that sends a request to my server every second.
No, you do not. Please do not open questions with false statements; it makes them hard to answer!
Your workflow is:
Request data
Wait for the data to arrive -- that's what asynchronously wait means. await is asynchronously wait.
Once the data has arrived, pause for a second, again, asynchronously
Repeat.
That is NOT the workflow "send a request every second". That is the workflow "send a request one second after the last request succeeded".
What happens during this time (10 or 15 seconds)?
Asynchronously waits. You said to asynchronously wait until the data was available, and that's what it does. During the asynchronous wait other tasks can be scheduled to execute on the thread.
Will the workflow wait until the long request is completed?
Yes. It will wait asynchronously. Again, that's what await means. It means asynchronously wait.
Will a new request be sent every second without waiting?
No. You said to wait until the data was received and then pause for one second, so that's what happens.
The whole idea of TAP (the Task Asynchronous Pattern) is that a single thread can service lots of things "simultaneously" because it can go back to what it was doing before, any time that an await is in progress. This is why the async keyword tends to appear on every method all the way down a call hierarchy, from the first code you write (your controller's Get method, for example), through every method you call, right down to where you need to await something like database or network IO.
Encountering an await is a bit like throwing an uncaught exception: control flow goes right back up the entire stack of methods that is your code, and out of the top, back to whatever was going on before, outside of your code. The difference between a thrown exception and an awaiting state machine is that when the awaited task is done, the thread that went off to do other things comes back to where the await is and continues from there.
What it was doing before is highly contextual - in your case it's probably "waiting for a TCP client to connect and send some data"
Now, in your code the thread goes back to what it was doing before. You say the request takes 15 seconds, so the thread will busy itself with other things for 15 seconds, then it will come back and wait for your 1000 ms delay to complete, then it will loop round and issue another request. In practice, this means your code makes a request roughly every 16 seconds, not the request every second that you were hoping for. Use a timer.
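To illustrate the "use a timer" suggestion: on .NET 6 or later, a PeriodicTimer keeps the one-second cadence even when an individual request is slow, as long as you don't await the request inline. The sketch below is only a rough illustration; ProcessResponseAsync is a hypothetical helper, and only GetDataFromServerAsync comes from the question.
public async Task StartRequestProcess(CancellationToken stoppingToken)
{
    using var timer = new PeriodicTimer(TimeSpan.FromSeconds(1));

    // WaitForNextTickAsync throws OperationCanceledException when stoppingToken is cancelled.
    while (await timer.WaitForNextTickAsync(stoppingToken))
    {
        // Start a request on every tick without awaiting it here, so a slow
        // response doesn't delay the next request.
        _ = ProcessResponseAsync();
    }
}

private async Task ProcessResponseAsync()
{
    try
    {
        var result = await GetDataFromServerAsync();
        // ... use result ...
    }
    catch (Exception)
    {
        // log and swallow so a failed request doesn't go unobserved
    }
}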
When you call await GetDataFromServerAsync(), execution of your method will resume once the asynchronous operation finishes, e.g., after 10 to 15 seconds. Only then will you wait asynchronously for another second.
I'm doing some tests with the new Background tasks with hosted services in ASP.NET Core feature present in version 2.1, more specifically with Queued background tasks, and a question about parallelism came to my mind.
I'm currently following strictly the tutorial provided by Microsoft, and when trying to simulate a workload with several requests being made from the same user to enqueue tasks, I noticed that all workItems are executed in order, so there is no parallelism.
My question is, is this behavior expected? And if so, in order to make the execution parallel, is it OK to fire and forget instead of awaiting each workItem to complete?
I've searched for a couple of days about this specific scenario without luck, so if anyone has any guide or examples to provide, I would be really glad.
Edit: The code from the tutorial is quite long, so the link for it is https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1#queued-background-tasks
The method which executes the work item is this:
public class QueuedHostedService : IHostedService
{
...
public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Queued Hosted Service is starting.");
_backgroundTask = Task.Run(BackgroundProcessing);
return Task.CompletedTask;
}
private async Task BackgroundProcessing()
{
while (!_shutdown.IsCancellationRequested)
{
var workItem =
await TaskQueue.DequeueAsync(_shutdown.Token);
try
{
await workItem(_shutdown.Token);
}
catch (Exception ex)
{
_logger.LogError(ex,
$"Error occurred executing {nameof(workItem)}.");
}
}
}
...
}
The main point of the question is to find out whether anyone could share how to use this specific technology to execute several work items at the same time, since a server can handle that workload.
I tried the fire-and-forget approach when executing the work items and it worked the way I intended: several tasks executing in parallel at the same time. I'm just not sure if this is an acceptable practice, or if there is a better or more proper way of handling this situation.
The code you posted executes the queued items in order, one at a time, but in parallel to the web server. An IHostedService by definition runs in parallel to the web server. This article provides a good overview.
Consider the following example:
_logger.LogInformation ("Before()");
for (var i = 0; i < 10; i++)
{
var j = i;
_backgroundTaskQueue.QueueBackgroundWorkItem (async token =>
{
var random = new Random();
await Task.Delay (random.Next (50, 1000), token);
_logger.LogInformation ($"Event {j}");
});
}
_logger.LogInformation ("After()");
We add ten tasks, each of which waits a random amount of time. If you put this code in a controller method, the events will still be logged even after the controller method returns. But each item is executed in order, so the output looks like this:
Event 0
Event 1
...
Event 8
Event 9
In order to introduce parallelism we have to change the implementation of the BackgroundProcessing method in the QueuedHostedService.
Here is an example implementation that allows two Tasks to be executed in parallel:
private async Task BackgroundProcessing()
{
    // Allow at most two work items to run at the same time.
    var semaphore = new SemaphoreSlim(2);

    // Called when a work item finishes; frees up a slot for the next one.
    void HandleTask(Task task)
    {
        semaphore.Release();
    }

    while (!_shutdown.IsCancellationRequested)
    {
        await semaphore.WaitAsync();                      // wait for a free slot
        var item = await TaskQueue.DequeueAsync(_shutdown.Token);
        var task = item(_shutdown.Token);                 // start the work item without awaiting it here
        task.ContinueWith(HandleTask);                    // release the slot when it completes
    }
}
Using this implementation, the events are no longer logged in order, as each task waits a random amount of time. So the output could be:
Event 0
Event 1
Event 2
Event 3
Event 4
Event 5
Event 7
Event 6
Event 9
Event 8
edit: Is it ok in a production environment to execute code this way, without awaiting it?
I think the reason why most devs have a problem with fire-and-forget is that it is often misused.
When you execute a Task using fire-and-forget, you are basically saying that you do not care about the result of this function. You do not care if it exits successfully, if it is canceled, or if it threw an exception. But for most Tasks you do care about the result.
You do want to make sure a database write went through
You do want to make sure a Log entry is written to the hard drive
You do want to make sure a network packet is sent to the receiver
And if you care about the result of the Task then fire-and-forget is the wrong method.
That's it in my opinion. The hard part is finding a Task where you really do not care about the result of the Task.
You can add the QueuedHostedService once or twice for every CPU in the machine.
So something like this:
for (var i=0;i<Environment.ProcessorCount;++i)
{
services.AddHostedService<QueuedHostedService>();
}
You can hide this in an extension method and make the concurrency level configurable to keep things clean.
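For example, a hypothetical extension method along those lines (the method name and parameter are illustrative; it just wraps the loop shown above) could look like this:
public static class QueuedHostedServiceExtensions
{
    // Registers the queued hosted service a configurable number of times.
    public static IServiceCollection AddQueuedHostedServices(
        this IServiceCollection services, int degreeOfParallelism)
    {
        for (var i = 0; i < degreeOfParallelism; i++)
        {
            services.AddHostedService<QueuedHostedService>();
        }
        return services;
    }
}

// Usage in Startup.ConfigureServices:
// services.AddQueuedHostedServices(Environment.ProcessorCount);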
I have a couple of hundred devices and I need to check their status every 5 seconds.
The API I'm using contains a blocking function that calls a DLL and returns the status of a single device:
string status = ReadStatus(deviceID); // blocks here until the status is returned
The above function usually returns the status in a couple of ms, but there will be situations where I might not get the status back for a second or more! Or even worse, one device might not respond at all.
I therefore need to introduce a form of asynchronicity to make sure that one device that doesn't respond doesn't impede the monitoring of all the others.
My current approach is as follows:
// triggers every 5 sec
public async void MonitorDevices_ElapsedInterval(object sender, ElapsedEventArgs elapsedEventArgs)
{
foreach (var device in lstDevices) // several hundred devices in the list
{
var task = device.ReadStatusAsync(device.ID, cts.Token);
tasks.Add(task);
}
// await all tasks finished, or timeout after 4900ms
await Task.WhenAny(Task.WhenAll(tasks), Task.Delay(4900, cts.Token));
cts.Cancel();
var devicesThatResponded = tasks.Where(t => t.Status == TaskStatus.RanToCompletion)
.Select(t => t.GetAwaiter().GetResult())
.ToList();
}
And below in the Device class
public async Task ReadStatusAsync(int deviceID, CancellationToken tk)
{
await Task.Delay(50, tk);
// calls the dll to return the status. Blocks until the status is returned
Status = ReadStatus(deviceID);
}
I'm having several problems with my code
The foreach loop fires a couple of hundred tasks simultaneously, with the callback from the Task.Delay being served by a thread from the thread pool, each task taking a couple of ms.
I see this as a big potential bottleneck. Are there any better approaches?
This might be similar to what Stephen Cleary commented on here, but he didn't provide an alternative: What it costs to use Task.Delay()?
In case ReadStatus fails to return, I'm trying to use a cancellation token to cancel the thread that sits there waiting for the response... This doesn't seem to work.
await Task.Delay(50, tk);
Thread.Sleep(100000); // simulate the device not responding
I still have about 20 worker threads alive (even though I was expecting cts.Cancel() to kill them).
the foreach loop fires a couple of hundred tasks simultaneously
Since ReadStatus is synchronous (I'm assuming you can't change this), and since each one needs to be independent because they can block the calling thread, then you have to have hundreds of tasks. That's already the most efficient way.
Are there any better approaches?
If each device should be read every 5 seconds, then each device having its own timer would probably be better. After a few cycles, they should "even out".
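A rough sketch of the per-device timer idea (assuming the blocking ReadStatus call from the question; _timers is a hypothetical list field kept only so the timers aren't garbage collected):
foreach (var device in lstDevices)
{
    var timer = new System.Threading.Timer(
        _ => device.ReadStatus(device.ID),   // each callback runs on a thread pool thread
        null,
        dueTime: TimeSpan.Zero,              // start immediately
        period: TimeSpan.FromSeconds(5));    // then read every 5 seconds
    _timers.Add(timer);
}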
await Task.Delay(50, tk);
I do not recommend using Task.Delay to "trampoline" non-async code. If you wish to run code on the thread pool, just wrap it in a Task.Run:
foreach (var device in lstDevices) // several hundred devices in the list
{
var task = Task.Run(() => device.ReadStatus(device.ID, cts.Token));
tasks.Add(task);
}
I'm trying to use a cancellation token to cancel the thread that sits there waiting for the response... This doesn't seem to work.
Cancellation tokens do not kill threads. If ReadStatus observes its cancellation token, then it should cancel; if not, then there isn't much you can do about it.
Thread pool threads should not be terminated; this reduces thread churn when the timer next fires.
As you can see in this Microsoft example page for a cancellation token, the doWork method checks for cancellation on each iteration of its loop, so the loop has to come back around before it can cancel out. In your case, when you simulate a long task, it never checks for cancellation at all while it's running.
From How do I cancel non-cancelable async operations?, it says at the end: "So, can you cancel non-cancelable operations? No. Can you cancel waits on non-cancelable operations? Sure… just be very careful when you do." So that answers it: we can't cancel it.
What I would suggest is to use threads from a ThreadPool: record the start time of each one and have a higher-priority thread that checks whether the others have exceeded their maximum allowed time. If so, call Thread.Interrupt().
How do I set a timeout for a busy method in C#?
Ok, here's the real answer.
...
void LongRunningMethod(object monitorSync)
{
//do stuff
lock (monitorSync) {
Monitor.Pulse(monitorSync);
}
}
void ImpatientMethod() {
Action<object> longMethod = LongRunningMethod;
object monitorSync = new object();
bool timedOut;
lock (monitorSync) {
longMethod.BeginInvoke(monitorSync, null, null);
timedOut = !Monitor.Wait(monitorSync, TimeSpan.FromSeconds(30)); // waiting 30 secs
}
if (timedOut) {
// it timed out.
}
}
...
This combines two of the most fun parts of using C#. First off, to call the method asynchronously, use a delegate which has the fancy-pants BeginInvoke magic.
Then, use a monitor to send a message from the LongRunningMethod back to the ImpatientMethod to let it know when it's done, or if it hasn't heard from it in a certain amount of time, just give up on it.
(p.s.- Just kidding about this being the real answer. I know there are 2^9303 ways to skin a cat. Especially in .Net)
You cannot do that unless you change the method.
There are two ways:
The method is built in such a way that it itself measures how long it has been running, and then returns prematurely if it exceeds some threshold.
The method is built in such a way that it monitors a variable/event that says "when this variable is set, please exit", and then you have another thread measure the time spent in the first method, and then set that variable when the time elapsed has exceeded some threshold.
The most obvious, but unfortunately wrong, answer you can get here is "Just run the method in a thread and use Thread.Abort when it has run for too long".
The only correct way is for the method to cooperate in such a way that it will do a clean exit when it has been running too long.
There's also a third way, where you execute the method on a separate thread; if, after waiting for it to finish, it is taking too long, you simply say "I am not going to wait for it to finish, I'll just discard it". In this case, the method will still run, and eventually finish, but the thread that was waiting for it simply gives up.
Think of the third way as calling someone and asking them to search their house for that book you lent them, and after waiting on your end of the phone for 5 minutes you simply say "aw, chuck it" and hang up. Eventually the other person will find the book and get back to the phone, only to notice that you no longer care about the result.
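As a rough illustration of that cooperative approach on .NET 4 and later (all names here are hypothetical; the point is just that the method checks a token between chunks of work and the caller puts a timeout on it):
void LongRunningMethod(CancellationToken token)
{
    foreach (var chunk in workChunks)           // workChunks is a placeholder for the method's real work
    {
        token.ThrowIfCancellationRequested();   // clean, cooperative exit point
        ProcessChunk(chunk);                    // ProcessChunk is also a placeholder
    }
}

// Caller: give the method 30 seconds, then let it exit cleanly on its own.
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
{
    try
    {
        LongRunningMethod(cts.Token);
    }
    catch (OperationCanceledException)
    {
        // it timed out and exited cooperatively
    }
}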
This is an old question but it has a simpler solution now that was not available then: Tasks!
Here is a sample code:
var task = Task.Run(() => LongRunningMethod());//you can pass parameters to the method as well
if (task.Wait(TimeSpan.FromSeconds(30)))
return task.Result; //the method returns elegantly
else
throw new TimeoutException();//the method timed-out
While MojoFilter's answer is nice, it can lead to leaks if the "LongMethod" freezes. You should ABORT the operation if you're not interested in the result anymore.
public void LongMethod()
{
//do stuff
}
public void ImpatientMethod()
{
Action longMethod = LongMethod; //use Func if you need a return value
ManualResetEvent mre = new ManualResetEvent(false);
Thread actionThread = new Thread(new ThreadStart(() =>
{
var iar = longMethod.BeginInvoke(null, null);
longMethod.EndInvoke(iar); //always call endinvoke
mre.Set();
}));
actionThread.Start();
mre.WaitOne(30000); // waiting 30 secs (or less)
if (actionThread.IsAlive) actionThread.Abort();
}
You can run the method in a separate thread, and monitor it and force it to exit if it runs for too long. A good way, if you can call it that, would be to develop an attribute for the method in PostSharp so the watching code isn't littering your application.
I've written the following as sample code (note the "sample code" part: it works, but it could suffer from multithreading issues, or break if the method in question catches the ThreadAbortException):
static void ActualMethodWrapper(Action method, Action callBackMethod)
{
try
{
method.Invoke();
} catch (ThreadAbortException)
{
Console.WriteLine("Method aborted early");
} finally
{
callBackMethod.Invoke();
}
}
static void CallTimedOutMethod(Action method, Action callBackMethod, int milliseconds)
{
new Thread(new ThreadStart(() =>
{
Thread actionThread = new Thread(new ThreadStart(() =>
{
ActualMethodWrapper(method, callBackMethod);
}));
actionThread.Start();
Thread.Sleep(milliseconds);
if (actionThread.IsAlive) actionThread.Abort();
})).Start();
}
With the following invocation:
CallTimedOutMethod(() =>
{
Console.WriteLine("In method");
Thread.Sleep(2000);
Console.WriteLine("Method done");
}, () =>
{
Console.WriteLine("In CallBackMethod");
}, 1000);
I need to work on my code readability.
Methods don't have timeouts in C#, unless you're in the debugger or the OS believes your app has 'hung'. Even then, processing still continues, and as long as you don't kill the application a response is returned and the app continues to work.
Calls to databases can have timeouts.
Could you create an Asynchronous Method so that you can continue doing other stuff whilst the "busy" method completes?
I regularly write apps where I have to synchronize time-critical tasks across platforms. If you can avoid Thread.Abort, you should. See http://blogs.msdn.com/b/ericlippert/archive/2010/02/22/should-i-specify-a-timeout.aspx and http://www.interact-sw.co.uk/iangblog/2004/11/12/cancellation for guidelines on when Thread.Abort is appropriate. Here are the concepts I implement:
Selective execution: Only run if a reasonable chance of success exists (based on the ability to meet the timeout, or the likelihood of success relative to other queued items). If you break code into segments and know roughly the expected time between task chunks, you can predict whether you should skip any further processing. Total time can be measured by wrapping task objects with a recursive function for time calculation, or by having a controller class that watches workers to know expected wait times. (A rough sketch of this idea follows at the end of this list.)
Selective orphaning: Only wait for a return value if a reasonable chance of success exists. Indexed tasks are run in a managed queue. Tasks that exceed their timeout, or that risk causing other timeouts, are orphaned and a null record is returned in their stead. Longer-running tasks can be wrapped in async calls. See this example async call wrapper: http://www.vbusers.com/codecsharp/codeget.asp?ThreadID=67&PostID=1
Conditional selection: Similar to selective execution, but based on a group instead of an individual task. If many of your tasks are interconnected such that one success or failure renders additional processing irrelevant, create a flag that is checked before execution begins and again before long-running sub-tasks begin. This is especially useful when you are using Parallel.For or other queued concurrency tasks.
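As a rough sketch of the selective execution idea mentioned above (all names are illustrative; the point is just to skip queued work that can no longer finish before the overall deadline):
var deadline = DateTime.UtcNow + TimeSpan.FromSeconds(30);

foreach (var item in queuedItems)
{
    // Only start this item if its typical duration still fits inside the deadline.
    if (DateTime.UtcNow + item.ExpectedDuration > deadline)
        continue; // no reasonable chance of success, skip it

    item.Execute();
}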