I have a Windows service that performs long-running operations every X minutes using System.Timers.Timer.
I need to know if there is a way to figure out, in the next loop, whether the previous execution is still running.
Is there also a way to know how much CPU and memory the previous thread is using?
A simple solution would be to use a shared, volatile bool that is set at the start of the method and reset at the end. Or use Interlocked.Increment/Decrement on a shared counter to keep a running total of the number of threads currently inside your method.
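A minimal sketch of both ideas, assuming a System.Timers.Timer and illustrative field/method names (DoLongOperation is hypothetical):

// using System.Threading; using System.Timers;
private volatile bool _isRunning;   // option 1: simple "still busy" flag
private int _activeCount;           // option 2: running total of concurrent executions

private void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    if (_isRunning) return;         // previous execution is still in progress
    _isRunning = true;
    Interlocked.Increment(ref _activeCount);
    try
    {
        DoLongOperation();          // hypothetical long-running work
    }
    finally
    {
        Interlocked.Decrement(ref _activeCount);
        _isRunning = false;
    }
}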
If you do not want invocations to overlap one another, one solution would be to use a Threading.Timer: set the due time you want and an infinite period, then at the end of each callback call .Change to reset the due time (a sketch follows the loop below). Another way to do essentially the same thing would be a loop over Task.Delay:
while (!cancellationToken.IsCancellationRequested)
{
    // do work
    await Task.Delay(periodTime);
}
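For the Threading.Timer variant described above, a minimal sketch (the 5-second due time, DoWork and the field names are illustrative):

// using System.Threading;
private System.Threading.Timer _timer;

public void Start()
{
    // fire once after 5 s; Timeout.Infinite means no automatic repetition
    _timer = new System.Threading.Timer(OnTick, null, 5000, Timeout.Infinite);
}

private void OnTick(object state)
{
    DoWork();                              // hypothetical work method
    _timer.Change(5000, Timeout.Infinite); // schedule the next one-shot run
}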
Your question about CPU/memory does not make that much sense. A thread is either running, waiting to run, or blocked. And the only memory that can be directly attributed to a thread is its stack, which is usually quite small; the default stack size is 1 MB, if I'm remembering correctly. If you want to measure how much time a thread spends in the running state, you need to do periodic sampling or instrument the scheduler.
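As a rough sketch of the sampling idea, System.Diagnostics can report CPU time per OS thread of the current process. Note that managed threads have no direct mapping to ProcessThread, so this only gives a per-thread picture at the OS level:

// using System; using System.Diagnostics;
foreach (ProcessThread t in Process.GetCurrentProcess().Threads)
{
    // TotalProcessorTime is the CPU time this OS thread has consumed so far
    Console.WriteLine("OS thread " + t.Id + ": " + t.TotalProcessorTime.TotalMilliseconds + " ms CPU");
}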
Context: we have a task which might take from 30 seconds to 5 minutes depending on a service we are consuming in some Azure Functions.
We are planning to monitor the current status of that task object to make sure it's running and has not been cancelled/faulted.
There are two ways to go about it:
Create a Task, run it, and then cancel it when the main task is finished. Alternatively, maybe use Task.Delay along with a while loop and a condition.
Create a Thread, run it, and wait for it to finish (with a while condition to avoid a loop that runs forever).
We have done some research and have realised that both have pros and cons. But we are still not sure about which one would be the best approach and why.
In a similar scenario, what would you use? A task, a thread, or something else?
Using a thread is a bit wasteful, but slightly more reliable.
It is wasteful because each thread allocates 1 MB of memory just for its mere existence.
It is more reliable because it doesn't depend on the availability of ThreadPool threads for running a timer event. A sudden burst in demand for ThreadPool threads could leave the ThreadPool starved for several seconds, or even minutes (in extreme scenarios).
So if wasting 1 MB of memory is a non-issue for the app, use a thread. On the other hand, if absolute precision in the timing of the events is unimportant, use a task.
You could also use a task started with the option LongRunning, but this is essentially a thread in disguise.
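A minimal sketch of the dedicated-thread option, with an assumed 30-second interval and illustrative names (stopRequested and DoScheduledWork are hypothetical):

// using System; using System.Threading;
var timerThread = new Thread(() =>
{
    while (!stopRequested)                     // hypothetical shared stop flag
    {
        DoScheduledWork();                     // hypothetical work method
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}) { IsBackground = true };                    // don't keep the process alive
timerThread.Start();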
I am developing a Windows Service application, in .NET, which executes many functions (it is a WCF service host), and one of the targets is running scheduled tasks.
I chose to create a System.Threading.Timer for every operation, with a dueTime set to the next execution and no period to avoid reentrancy.
Every time the operation ends, it changes the dueTime to match the next scheduled execution.
Most of the operations are scheduled to run every minute, not all together but staggered by a few seconds from each other.
Now, after adding a number of operations, about 30, it seems that the timers start to be inaccurate, starting the operations many seconds late, or even minutes late.
I am running the operation logic directly in the callback method of the timer, so the running thread should be the same as the timer.
Should I create a Task to run the operation instead of running it in the callback method to improve accuracy?
Or should I use a single timer with a fixed (1 second) dueTime to check which operations need to be started?
I don't like this last option because it would be more difficult to handle reentrancy.
Timers fire on a thread pool thread, so you are probably finding that as you add lots of timers that you are exhausting the thread pool.
You could increase the size of the thread pool, or alternatively ensure you have fewer timers than the thread pool size.
Firing off Tasks from the callback likely won't help - since you are going to be fighting for threads from the same thread pool. Unless you use long-running tasks.
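If you do want to go the "bigger pool" route, one way (a sketch, not necessarily the right numbers for your workload) is to raise the ThreadPool minimum so roughly 30 concurrent callbacks can get a thread without waiting for the pool's slow injection rate:

// using System; using System.Threading;
ThreadPool.GetMinThreads(out int worker, out int io);
ThreadPool.SetMinThreads(Math.Max(worker, 40), io);  // 40 is an illustrative figure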
We usually set up multiple timers to handle different actions within a single service. We set the intervals and start and stop the timers on the service Start/Stop/Shutdown events (and have a variable indicating the status of each one, i.e. bool Stopped).
When a timer ticks over, we stop the timer and run the processing, which may take a while depending on the process, i.e. it may take longer than the interval if the interval is short. This code needs to be in a try/catch so it keeps going on errors.
After the code has processed, we check the Stopped variable and, if it's not stopped, we start the timer again (this handles the reentrancy that you've mentioned and allows the code to stick to the interval as much as possible).
Timers are generally more accurate after about 100ms as far as I know, but should be close enough for what you want to do.
We have run this concept for years, and it hasn't let us down.
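A minimal sketch of the pattern, assuming a System.Timers.Timer and illustrative names (RunProcessing, Log and the Stopped flag stand in for your own members):

// using System; using System.Timers;
private void OnElapsed(object sender, ElapsedEventArgs e)
{
    _timer.Stop();                    // no reentrancy while we work
    try
    {
        RunProcessing();              // hypothetical work; may take longer than the interval
    }
    catch (Exception ex)
    {
        Log(ex);                      // swallow and log so the loop keeps going
    }
    finally
    {
        if (!Stopped) _timer.Start(); // resume unless the service is stopping
    }
}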
If you're running these tasks as a sub-system of an ASP.NET app, you should also look at HangFire, which can handle background processing, eliminating the need for the windows service.
How accurate do the timers need to be? You could always use a single timer and run multiple processing threads at the same time, or queue the calls to less critical operations.
Ok, I came to a decision: since I am not able to easily reproduce the behavior, I chose to solve the root problem and use the service process only to:
serve WCF requests done by clients
schedule operations (which was problematic)
Every operation that could eat CPU is executed by another process, which is controlled directly by the main process (with System.Diagnostics.Process and its events) and communicates with it through WCF.
When I start the secondary process, I pass to it the PID of the main process through command line. If the latter gets killed, the Process.Exited event fires, and I can close the child process too.
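For reference, a minimal sketch of the child-process side, assuming the main service's PID arrives as the first command-line argument:

// using System; using System.Diagnostics;
int parentPid = int.Parse(args[0]);
var parent = Process.GetProcessById(parentPid);
parent.EnableRaisingEvents = true;                // required for Exited to fire
parent.Exited += (s, e) => Environment.Exit(0);   // die together with the main service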
This way the main service usually doesn't use much CPU time, and is free to schedule happily without delays.
Thanks to all who gave me advice!
By default, the CLR runs tasks on pooled threads, which is ideal for short-running compute-bound work. For longer-running and blocking operations, you can prevent use of a pooled thread as follows:
Task task = Task.Factory.StartNew (() => ...,
TaskCreationOptions.LongRunning);
I am reading a topic about threads and tasks. Can you explain to me what "long[er]-running" and "short-running" tasks are?
In general thread pooling, you distinguish short-running and long-running threads based on the comparison between their start-up time and run time.
Threads generally take some time to be created and get up to the point where they can start running your code.
This means that if you run a large number of threads that each take a minute to start but only run for a second (not accurate times, but the intent here is simply to show the relationship), the run time of each will be swamped by the time taken to get them going in the first place.
That's one of the reasons for using a thread pool: the threads aren't terminated once their work is done. Instead, they hang around to be reused so that the start-up time isn't incurred again.
So, in that sense, a long running thread is one whose run time is far greater than the time required to start it. In that case, the start-up time is far less important than it is for short running threads.
Conversely, short running threads are ones whose run time is less than or comparable to the start-up time.
For .NET specifically, it's a little different in operation. The thread pooling code will, once it's reached the minimum number of threads, attempt to limit thread creation to one per half-second.
Hence, if you know your thread is going to be long running, you should notify the scheduler so that it can adjust itself accordingly. This will probably mean just creating a new thread rather than grabbing one from the pool, so that the pool can be left to service short-running tasks as intended (no guarantees on that behaviour but it would make sense to do it that way).
However, that doesn't change the meaning of long-running and short-running, all it means is that there's some threshold at which it makes sense to distinguish between the two. For .NET, I would suggest the half-second figure would be a decent choice.
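A minimal sketch contrasting the two options with the default scheduler; IsThreadPoolThread shows where each delegate ends up running (the console output is just for illustration):

// using System; using System.Threading; using System.Threading.Tasks;
Task pooled = Task.Factory.StartNew(() =>
    Console.WriteLine("pooled: " + Thread.CurrentThread.IsThreadPoolThread));       // True

Task dedicated = Task.Factory.StartNew(() =>
    Console.WriteLine("long-running: " + Thread.CurrentThread.IsThreadPoolThread),  // False
    TaskCreationOptions.LongRunning);

Task.WaitAll(pooled, dedicated);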
I have 20 threads running at a time in my program (create 20, wait for them to finish, start another 20); after a while my program slows way down. Do I need to free the threads or do anything special? If so, how? If not, is there a common reason why a program like this would slow down?
You might want to consider using the ThreadPool, either directly, or via the Task Parallel Library (my preferred option). This is likely a better, simpler, and cleaner design than spawning your own threads and blocking on them repeatedly.
That being said, if your program is getting progressively slower, this is something where a profiler can help dramatically. Without seeing code, it's very difficult to diagnose. For example, depending on the work that you're doing, you may be causing the GC to become less efficient over time, which could cause the % of time spent in GC to climb as the program continues its execution. Profiling should give you a good indication of what is taking time as your program executes.
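As a sketch of the TPL option, the whole "create 20, wait, create 20 more" cycle can be replaced with one call that caps the parallelism (workItems and ProcessItem are hypothetical placeholders for your own data and work method):

// using System.Threading.Tasks;
Parallel.ForEach(
    workItems,                                              // hypothetical collection of work
    new ParallelOptions { MaxDegreeOfParallelism = 20 },
    item => ProcessItem(item));                             // hypothetical work method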
Reed's answer is probably the best way to deal with your issue; however, if you do want to manage the threads yourself, and not use the ThreadPool or TPL, I'd have to ask why you would let 20 threads die and create 20 more. Creating threads is an expensive process, which is why the thread pool exists. If you continually have the same number of parallel tasks, or a maximum number, they should be created once and reused. You can use locking constructs such as semaphore and mutex and have the threads wait when they are done, and just give them new data to work with and release them to proceed again. Waiting on a lock is a very inexpensive operation -- orders of magnitude cheaper than recreating a thread.
So for example, a thread might look like this (pseudocode):
while (program_not_ending)
{
wait_for_new_data_release; // Wait on thread's personal mutex
process_new_data;
resignal_my_mutex; // Cause the beginning of loop to wait again
release_semaphore_saying_I_am_done; // Increment parent semaphore count
}
The parent would then wait for its semaphore to indicate that all 20 threads have completed, reset the data buckets, and clear all of the thread mutexes.
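A hedged C# sketch of that reuse pattern, using an AutoResetEvent per worker and a CountdownEvent for the parent instead of raw mutexes/semaphores (ProcessNewData is hypothetical):

// using System.Threading;
var start = new AutoResetEvent[20];
var done  = new CountdownEvent(20);
for (int i = 0; i < 20; i++)
{
    start[i] = new AutoResetEvent(false);
    int id = i;
    new Thread(() =>
    {
        while (true)
        {
            start[id].WaitOne();   // wait for new data to be released
            ProcessNewData(id);    // hypothetical per-thread work
            done.Signal();         // tell the parent this worker has finished
        }
    }) { IsBackground = true }.Start();
}

// Parent: reset the count, hand out a batch of data, wake the workers, wait for all of them.
done.Reset();
foreach (var e in start) e.Set();
done.Wait();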
I am working on a project in C#.NET using the .NET framework version 3.5.
My project has a class called Focuser.cs which represents a physical device, a telescope focuser, that can communicate with a PC via a serial (RS-232) port. My class (Focuser) has properties such as CurrentPosition, CurrentTemperature, etc. which represent the current conditions of the focuser, and these can change at any time. So, my Focuser class needs to continually poll the device for these values and update its internal fields. My question is, what is the best way to perform this continual polling sequence? Occasionally, the user will need to switch the device into a different mode, which will require the ability to stop the polling, perform some action, and then resume polling.
My first attempt was to use a timer that ticks every 500 ms and then calls up a background worker which polls for one position and one temperature then returns. When the timer ticks, if the background worker IsBusy then it just returns and tries again 500 ms later. Someone suggested that I get rid of the background worker altogether and just do the poll in the timer tick event. So I set the AutoReset property of the timer to false and then just restart the timer every time a poll finishes. These two techniques seemed to behave the exact same way in my application, so I am not sure if one is better than the other. I also tried creating a new thread every time I want to do a poll operation, using a new ThreadStart and all that. This also seemed to work fine.
I should mention one other thing. This class is part of a COM object server which basically means that the class library that is produced will be called upon via COM. I am not sure if this has any influence on the answer but I just thought I should throw it out there.
The reason I am asking all of this is that all of my test harness runs and debug builds work just fine but when I do a release build and try to make calls to my class from another application, that application freezes up and I am having a hard time determining the cause.
Any advice, suggestions, comments would be appreciated.
Thanks, Jordan
Remember that the timer hides its own background worker thread, which basically sleeps for the interval, then fires its Elapsed event. Knowing that, it makes sense just to put the polling in Elapsed. This would be the best practice IMO, rather than starting a thread from a thread. You can start and stop Timers as well, so the code that switches modes can Stop() the Timer, perform the task, then Start() it again, and the Timer doesn't even have to know the telescope IsBusy.
However, what I WOULD keep track of is whether another instance of the Elapsed event handler is still running. You could lock the Elapsed handler's code, or you could set a flag, visible from any thread, that indicates another Elapsed() event handler is still working; Elapsed event handlers that see this flag set can exit immediately, avoiding concurrency problems working with the serial port.
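A minimal sketch of the flag approach, assuming a System.Timers.Timer and an illustrative PollTelescope method:

// using System.Threading; using System.Timers;
private int _polling;   // 0 = idle, 1 = busy

private void OnElapsed(object sender, ElapsedEventArgs e)
{
    if (Interlocked.CompareExchange(ref _polling, 1, 0) != 0)
        return;              // a previous poll is still talking to the serial port
    try
    {
        PollTelescope();     // hypothetical serial-port poll
    }
    finally
    {
        Interlocked.Exchange(ref _polling, 0);
    }
}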
So it looks like you have looked at 2 options:
Timer. The Timer is non-blocking while waiting (uses another thread), so the rest of the program can continue running and be responsive. When the timer event kicks off, you simply get/update the current values.
Timer + BackgroundWorker. The background worker is also simply a separate thread. It may take longer to actually start the thread than to simply get the current values. Unless it takes a long time to get the current values and causes your program to become unresponsive, this is unnecessary complexity.
If getting values is fast enough, stick to #1 for simplicity.
If getting values is slow, #2 will work but unnecessarily has a thread start a thread. Instead, do it with only a BackgroundWorker (no Timer). Create the BackgroundWorker once and store in a variable. No need to recreate it every time. Make sure to set WorkerSupportsCancellation to true. Whenever you want to start checking values, on your main program thread do bgWorker.RunWorkerAsync(). When you want to stop, do bgWorker.CancelAsync(). Inside your DoWork method, have a loop that checks the values and does a Thread.Sleep(500). Since it's a separate thread, it won't make your program unresponsive. In the loop conditions, also check to see if the polling was cancelled and break out. You'll probably need a way to get the values back to the main thread. You can use ReportProgress() if an integer is good enough. Otherwise you can create an object to hold the content, but make sure to lock (object) { } before reading and modifying it. This is a quick summary, but if you go this route I would recommend you read: http://www.albahari.com/threading/part3.aspx#_BackgroundWorker
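A rough sketch of that BackgroundWorker-only approach (currentPosition, ReadPosition and the 500 ms interval are illustrative):

// using System.ComponentModel; using System.Threading;
var bgWorker = new BackgroundWorker
{
    WorkerSupportsCancellation = true,
    WorkerReportsProgress = true
};
bgWorker.DoWork += (s, e) =>
{
    while (!bgWorker.CancellationPending)
    {
        int position = ReadPosition();      // hypothetical serial read
        bgWorker.ReportProgress(position);  // marshal the value back as an int
        Thread.Sleep(500);
    }
    e.Cancel = true;
};
bgWorker.ProgressChanged += (s, e) => currentPosition = e.ProgressPercentage;

bgWorker.RunWorkerAsync();   // start polling
// ... later ...
bgWorker.CancelAsync();      // stop polling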
Does the process of contacting the telescope and getting the current values actually take long enough to warrant polling? Have you tried dropping the multithreading and just blocking while you get the current value?
To answer your question, however, I would suggest not using a background worker but an actual Thread that updates the properties continuously.
If all these properties are read only (can you set the temp of the telescope?) and there are no dependencies between them (e.g., no transactions are required to update multiple properties at once) you can drop all the blocking code and let your thread update willy-nilly while other threads access the properties.
I suggest a real, dedicated Thread rather than the thread pool just because of a lack of knowledge of what might happen when mixing background threads and COM servers. Also, apartment state might play into this; with a Thread you can try STA but you can't do that with a threadpool thread.
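A minimal sketch of that dedicated-thread option (PollLoop is a hypothetical method containing the polling loop):

// using System.Threading;
var pollThread = new Thread(PollLoop) { IsBackground = true };
pollThread.SetApartmentState(ApartmentState.STA);   // not possible on a thread-pool thread
pollThread.Start();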
You say the app freezes up in a release build?
To eliminate extra variables, I'd take all the timer/multi-threaded code out of the application (just comment it out), and try it with a straightforward blocking method.
i.e. You click a button, it calls a function, that function hits the COM object for data, and then updates the UI. All in a blocking, synchronous fashion. This will tell you for sure whether it's the multi-threading code that's freezing you up, or if it's the COM interaction itself.
How about starting a background thread with the ThreadPool? Then enter a loop based on a bool (while (bContinue)) that does your work and then a Thread.Sleep at the end of the loop. Exiting the program would include setting bContinue to false so the thread stops; perhaps hook that up to the OnStop event in a Windows service:
// e.g. started from the service's OnStart:
bool bRet = ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadFunc));

private volatile bool bContinue;     // stop flag, set to false from OnStop
private int m_iWaitTime_ms = 500;    // wait time between iterations

private void ThreadFunc(object objState)
{
    // enter loop
    bContinue = true;
    while (bContinue)
    {
        // do stuff

        // sleep
        Thread.Sleep(m_iWaitTime_ms);
    }
}