I'm looking at implementing a "Heartbeat" process to do a lot of repeated cleanup tasks throughout the day.
This seemed like a good chance to use the Command pattern, so I have an interface that looks like:
public interface ICommand
{
void Execute();
bool IsReady();
}
I've then created several tasks that I want to be run. Here is a basic example:
public class ProcessFilesCommand : ICommand
{
private int secondsDelay;
private DateTime? lastRunTime;
public ProcessFilesCommand(int secondsDelay)
{
this.secondsDelay = secondsDelay;
}
public void Execute()
{
Console.WriteLine("Processing Pending Files...");
Thread.Sleep(5000); // Simulate long running task
lastRunTime = DateTime.Now;
}
public bool IsReady()
{
if (lastRunTime == null) return true;
TimeSpan timeSinceLastRun = DateTime.Now.Subtract(lastRunTime.Value);
return (timeSinceLastRun.TotalSeconds > secondsDelay);
}
}
Finally, my console application runs in this loop looking for waiting tasks to add to the ThreadPool:
class Program
{
static void Main(string[] args)
{
bool running = true;
Queue<ICommand> taskList = new Queue<ICommand>();
taskList.Enqueue(new ProcessFilesCommand(60)); // 1 minute interval
taskList.Enqueue(new DeleteOrphanedFilesCommand(300)); // 5 minute interval
while (running)
{
ICommand currentTask = taskList.Dequeue();
if (currentTask.IsReady())
{
ThreadPool.QueueUserWorkItem(t => currentTask.Execute());
}
taskList.Enqueue(currentTask);
Thread.Sleep(100);
}
}
}
I don't have much experience with multi-threading beyond some work I did in Operating Systems class. However, as far as I can tell none of my threads are accessing any shared state so they should be fine.
Does this seem like an "OK" design for what I want to do? Is there anything you would change?
This is a great start. We've done a bunch of things like this recently so I can offer a few suggestions.
Don't use the thread pool for long-running tasks. The thread pool is designed to run lots of tiny tasks. If you're doing long-running tasks, use a separate thread. If you starve the thread pool (use up all of its threads), everything that gets queued up just waits for a pool thread to become available, significantly hurting the effective performance of the thread pool.
Have the Main() routine keep track of when things ran and how long until each runs next. Instead of each command answering "yes I'm ready" or "no I'm not" (logic that will be identical across commands), just have LastRun and Interval fields that Main() can use to determine when each command needs to run.
Don't use a Queue. While it may seem like a queue-type operation, since each command has its own interval it's really not a normal queue. Instead, put all the commands in a List and sort the list by the shortest time until the next run. Sleep the thread until the first command needs to run. Run that command. Re-sort the list by the next command to run. Sleep. Repeat.
Don't use multiple threads. If each command's interval is a minute or few minutes, you probably don't need to use threads at all. You can simplify by doing everything on the same thread.
Error handling. This kind of thing needs extensive error handling to make sure a problem in one command doesn't make the whole loop fail, and so you can debug a problem when it occurs. You also may want to decide whether a command should be retried immediately on error, wait until its next scheduled run, or even be delayed longer than normal. You may also want to avoid logging the same error every time it happens (an error in a command that runs often can easily create huge log files).
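Putting the last few suggestions together (Main() owning the schedule, a sorted list, a single thread, and error handling), a minimal sketch could look something like this. The reworked ICommand and the RunLoop method are illustrative only, not a drop-in replacement for the code above, and assume the usual System, System.Collections.Generic and System.Threading namespaces:
public interface ICommand
{
    TimeSpan Interval { get; }
    DateTime NextRun { get; set; }
    void Execute();
}
static void RunLoop(List<ICommand> commands)
{
    while (true)
    {
        // Always run the command that is due soonest.
        commands.Sort((a, b) => a.NextRun.CompareTo(b.NextRun));
        ICommand next = commands[0];
        TimeSpan wait = next.NextRun - DateTime.Now;
        if (wait > TimeSpan.Zero)
            Thread.Sleep(wait);
        try
        {
            next.Execute();
        }
        catch (Exception ex)
        {
            // Log and carry on; one bad command must not stop the heartbeat.
            Console.WriteLine("Command failed: " + ex.Message);
        }
        next.NextRun = DateTime.Now + next.Interval;
    }
}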
Instead of writing everything from scratch, you could choose to build your application using a framework that handles all of the scheduling and threading for you. The open-source library NCron is designed for exactly this purpose, and it is very easy to use.
Define your job like this:
class MyFirstJob : CronJob
{
public override void Execute()
{
// Put your logic here.
}
}
And create a main entry point for your application including scheduling setup like this:
class Program
{
static void Main(string[] args)
{
Bootstrap.Init(args, ServiceSetup);
}
static void ServiceSetup(SchedulingService service)
{
service.Hourly().Run<MyFirstJob>();
service.Daily().Run<MySecondJob>();
}
}
This is all the code you will need to write if you choose to go down this path. You also get the option to do more complex schedules or dependency injection if needed, and logging is included out-of-the-box.
Disclaimer: I am the lead programmer on NCron, so I might just be a tad biased! ;-)
I would make all your Command classes immutable to ensure that you don't have to worry about changes to state.
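For example, if the scheduling state (the last run time) lives outside the command, the command itself can be reduced to immutable configuration plus an Execute method. A sketch, where the sourceFolder setting is made up purely for illustration:
public class ProcessFilesCommand
{
    private readonly string sourceFolder; // set once in the constructor, never changed

    public ProcessFilesCommand(string sourceFolder)
    {
        this.sourceFolder = sourceFolder;
    }

    public void Execute()
    {
        Console.WriteLine("Processing pending files in " + sourceFolder + "...");
    }
}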
Nowadays the Parallel Extensions (Task Parallel Library) from Microsoft are the preferred option for writing concurrent code or doing any thread-related work. They provide a good abstraction on top of the thread pool and system threads, so you don't have to think in such an imperative manner to get the work done.
In my opinion, consider using them. By the way, your code is clean.
Thanks.
The running variable will need to be marked as volatile if its state is going to be changed by another thread.
As to the suitability, why not just use a Timer?
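For instance, the whole heartbeat could be a couple of System.Threading.Timer instances, one per task, with no polling loop at all. A sketch, where ProcessPendingFiles and DeleteOrphanedFiles stand in for the real work:
// Keep references to the timers so they are not garbage collected.
var fileTimer = new System.Threading.Timer(
    _ => ProcessPendingFiles(),       // placeholder for the real work
    null,
    TimeSpan.Zero,                    // first run immediately
    TimeSpan.FromSeconds(60));        // then every minute

var orphanTimer = new System.Threading.Timer(
    _ => DeleteOrphanedFiles(),
    null,
    TimeSpan.Zero,
    TimeSpan.FromMinutes(5));

Console.ReadLine(); // keep the console app alive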
Related
I have a program that takes a long time to initialize but whose execution is rather fast.
It's becoming a bottleneck, so I want to start multiple instances of this program (like a pool) already initialized, and the idea is to just pass the needed arguments for its execution, saving all the initialization time.
The problem is that I only found how to start new processes passing arguments:
How to pass parameters to ThreadStart method in Thread?
but I would like to start the process normally and then be able to communicate with it to send each thread the parameters required for its execution.
The best approach I found was to create multiple threads where I would initialize the program and then, using some communication mechanism (named pipes for example, since it's all running on the same machine), be able to pass those arguments and trigger the execution of the program (one of the triggers could break an infinite loop, for example).
I'm asking if anyone can advise a better solution than the one I came up with.
I suggest you don't mess with direct Thread usage, and use the TPL, something like this:
foreach (var data in YOUR_INITIALIZATION_LOGIC_METHOD_HERE)
{
Task.Run(() => yourDelegate(data)); // other parameters (e.g. a CancellationToken) could go here
}
More about Task.Run on MSDN, Stephen Cleary blog
Process != Thread
A thread lives inside a process, while a process is an entire program or service in your OS.
If you want to speed up your app initialization you can still use threads, but nowadays we use them through the Task Parallel Library and the Task-based Asynchronous Pattern.
In order to communicate between tasks (usually threads), you might need to implement some kind of state machine (a basic workflow) where you can detect when a task progresses and perform actions based on its state (running, failed, completed...).
Anyway, you don't need named pipes or anything like that to communicate between tasks/threads, since everything lives in the same parent process. That is, you can use regular programming approaches to do so. I mean: use C#, thread synchronization mechanisms, and some kind of in-app messaging.
Some very basic idea...
.NET has a List<T> collection class. You could design a coordinator class that holds a list of message objects (a class designed by you) like this:
public enum OperationType { DataInitialization, Authentication, Caching }
public class Message
{
public OperationType Operation { get; set; }
public Task Task { get; set; }
}
As you start all the parallel initialization tasks, you add each one to the list in the coordinator class:
Coordinator.Messages.AddRange
(
new List<Message>
{
new Message
{
Operation = OperationType.DataInitialization,
Task = dataInitTask
},
..., // <--- more messages
}
);
Once you've added all messages with pending initialization tasks, somewhere in your code you can wait until initialization ends asynchronously this way:
// You do a projection of each message to get an IEnumerable<Task>
// to give it as argument of Task.WhenAll
await Task.WhenAll(Coordinator.Messages.Select(message => message.Task));
While this line asynchronously waits for all initialization to finish, your UI (i.e. the main thread) can keep working and show some kind of loading animation or whatever else you like.
Perhaps you can go a step further and not wait for everything, but only for the group of tasks that lets your users start using the app, while the other non-critical tasks finish in the background...
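For instance, to wait only for the critical group (say, authentication) and let the rest finish in the background, you could filter the same message list. A sketch reusing the hypothetical Coordinator from above (requires System.Linq):
// Wait only for the tasks the user actually needs before the app becomes usable.
await Task.WhenAll(
    Coordinator.Messages
        .Where(m => m.Operation == OperationType.Authentication)
        .Select(m => m.Task));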
My program needs to constantly perform many repetitive calculations as fast as possible. There are many tasks running in parallel, which pushes CPU utilisation to 100%. To let users throttle the processing load (to a little under 100% CPU, depending on hardware), I added
await Task.Delay(TimeSpan.FromMilliseconds(doubleProcessingCycleIntervalMilliseconds));
to heavy processing methods. This works perfect as far as value of doubleProcessingCycleIntervalMilliseconds is at least 1 ms.
For users who have high-end computers(calculations speed will take less than one millisecond), I wanted to add same option for delay but instead of milliseconds using ticks. So now code looks:
if (ProcessingCycleIntervalOptionsMilliseconds == true)
{
await Task.Delay(TimeSpan.FromMilliseconds(doubleProcessingCycleIntervalMilliseconds));
}
else
{
await Task.Delay(TimeSpan.FromTicks(longProcessingCycleIntervalTicks));
}
When the value of longProcessingCycleIntervalTicks is at least 10,000 ticks (= 1 ms) the program works perfectly. Unfortunately, when values go under 1 ms (0 for doubleProcessingCycleIntervalMilliseconds, which I can understand) or under 10,000 (e.g. 9999 for longProcessingCycleIntervalTicks), the program becomes unresponsive. So literally a difference of 1 tick below 1 ms hangs the program. I don't use MVVM. (Just in case: I checked that Stopwatch.IsHighResolution returns true on the development computer.)
Is it possible/correct to use
await Task.Delay(TimeSpan.FromTicks(longProcessingCycleIntervalTicks));
in .NET 4.5.1? If yes, how do I determine when a user can use it?
Your intention is not to keep CPU utilization below 100%. Your intention is to keep the system responsive. Limiting CPU utilization is a misguided goal.
The way you do this is by using low priority threads. Use a custom task scheduler for your CPU bound tasks.
Timing in Windows has limited accuracy. Thread.Sleep cannot work with fractional milliseconds. .NET rounds them away before handing over to Sleep.
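A minimal sketch of such a custom scheduler, assuming one dedicated worker thread at BelowNormal priority is enough (illustrative, not production-hardened code):
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public sealed class LowPriorityTaskScheduler : TaskScheduler
{
    private readonly BlockingCollection<Task> _tasks = new BlockingCollection<Task>();

    public LowPriorityTaskScheduler()
    {
        var worker = new Thread(() =>
        {
            // Pull queued tasks forever and execute them on this thread.
            foreach (Task task in _tasks.GetConsumingEnumerable())
                TryExecuteTask(task);
        });
        worker.IsBackground = true;
        worker.Priority = ThreadPriority.BelowNormal; // yields to normal-priority threads
        worker.Start();
    }

    protected override void QueueTask(Task task)
    {
        _tasks.Add(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // keep all work on the low-priority thread
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return _tasks.ToArray();
    }
}
You would then start the CPU-bound work with something like Task.Factory.StartNew(heavyWork, CancellationToken.None, TaskCreationOptions.None, new LowPriorityTaskScheduler()), and the system stays responsive without any artificial delays.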
You might be better off looking at the way you are performing the tasks rather than trying to sleep them.
The best way I can think of is to use a task manager that manages each task independently (such as a background worker) and then to run collections of tasks on threads.
This would enable you to manage how many tasks are running instead of trying to 'slow' them down.
For example:
public class Task<returnType>
{
public delegate returnType funcTask(params object[] args);
public delegate void returnCallback(returnType ret);
public funcTask myTask;
public event returnCallback Callback;
public Task(funcTask myTask, returnCallback Callback)
{
this.myTask = myTask;
this.Callback = Callback;
}
public void DoWork(params object[] args)
{
if (this.Callback != null)
{
this.Callback(myTask(args));
}
else
{
throw new Exception("no Callback!");
}
}
}
Then you need a manager that holds a Queue of the tasks you want to complete: call myQueue.Enqueue to queue a task and myQueue.Dequeue to run the tasks. Basically you can use the built-in Queue<T> to do this.
You can then create a Queue of task managers full of tasks and have them all run asynchronously; they stack nicely on the CPU since they are event-driven, and the OS and .NET will sort out the rest.
EDIT:
To run tasks continuously you will need to create a class that inherits from the Queue class, then raise an event when something is dequeued. The reason I suggest using events is that they stack on the CPU.
For a neverending stackable 'Loop' something like this would work...
public class TaskManager<T> : Queue<T>
{
public delegate void taskDequeued();
public event taskDequeued OnTaskDequeued;
public new T Dequeue() // Queue<T>.Dequeue is not virtual, so hide it rather than override it
{
T ret = base.Dequeue();
if (OnTaskDequeued != null) OnTaskDequeued();
return ret;
}
}
In your function that instantiates the 'loop' you need to do something like...
TaskManager<Task<int>> tasks = new TaskManager<Task<int>>();
Task<int> task = new Task<int>(i => 3 + 4, WriteIntToScreen); // WriteIntToScreen is a fake function to use as the callback
tasks.Enqueue(task);
tasks.OnTaskDequeued += delegate
{
tasks.Enqueue(task);
tasks.Dequeue().DoWork();
};
// start the routine with
tasks.Dequeue().DoWork(); // you could do some async threading here with BeginInvoke or something, but I am not gonna write all that out as it will be pages...
To cancel you just empty the queue.
Possible Duplicate:
What is the correct way to delay the start of a Task in c#
I need to schedule small tasks to be executed in the future (the delay is always < 1 minute). The implementation is in .NET on the .NET 4.0 runtime. The Async CTP is an option, although I don't see the added value for the moment.
the scheduling needs to be async
scheduling resolution is in seconds
the task execution is implicitly async (I think)
the number of scheduled tasks could be in the hundreds or even thousands
it is possible, however unlikely, that two tasks will be scheduled at the exact same time
My current solution is this:
internal class TimerState
{
internal Timer Timer { get; set; }
internal object Payload { get; set; }
internal Action<object> Action { get; set; }
}
public class TimerModule
{
public static void ScheduleTask(object input, Action<object> action, TimeSpan delay)
{
//create state to pass to timer method
var state = new TimerState { Payload = input, Action = action };
//schedule timer without firing
var t = new Timer(HandleScheduleTimer, state, -1, -1);
//add timer to state to be able to dispose it
state.Timer = t;
//schedule timer to fire in delay time
t.Change(delay, TimeSpan.FromMilliseconds(-1));
}
private static void HandleScheduleTimer(object state)
{
var s = state as TimerState;
Task.Factory.StartNew(s.Action, s.Payload, CancellationToken.None,
TaskCreationOptions.PreferFairness, TaskScheduler.Current);
//dispose the timer immediately
if(s.Timer != null)
s.Timer.Dispose();
}
}
I've done some tests with performance counters (.NET physical threads), but I don't see that many threads running at the same time, even though I schedule thousands of tasks at approximately the same time.
Is there a better way to do this?
Are there any proven design patterns around this?
Most scheduling solutions I've found are reliable across restarts, but I don't need that: I can replay the data in the system after a crash and compensate for the scheduled tasks that weren't executed.
Edit: I don't mean that I want to see thousands of threads running at the same time. I'm aware that this will probably be handled with the threadpool. Like I say in the comments, a test with 5M tasks spanned over 100 seconds only sees an increase of 20 in the physical threads.
My main question is this: is there a better way to delay task execution?
The solution you're looking for is Quartz.
http://quartznet.sourceforge.net/
It's a very robust scheduler capable of handling different jobs tied to different schedules. It is far more reliable than a timer. It has failover if anything goes wrong.
The schedules either use a CRON interval or can run on specific dates. I've heard of instances having thousands of jobs. Quartz can be clustered if need be, and can run stateful (keeping knowledge of previous instances) and non-stateful jobs. Depending on how they're configured, they can run simultaneously or in a queue. Most thread and thread safety issues have been taken care of for you, you're free to write your job class without headache.
Setup is simple (install the service and configure it), and creating Quartz jobs is even simpler. Create a class that implements IJob, apply your logic, and add it to the Quartz config. The config can be an XML file, a SQL Server database, or you can create your own solution. I'm working on one for RavenDB.
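A rough sketch of what that looks like with the Quartz.NET 2.x fluent API (the job class and the 30-second schedule are just examples):
using Quartz;
using Quartz.Impl;

public class CleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Put your job logic here.
    }
}

public static class SchedulerSetup
{
    public static void Start()
    {
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = factory.GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>().Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(30).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}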
Writing an infinite loop is simple:
while(true){
//add whatever break condition here
}
But this will trash CPU performance. This execution thread will take as much of the CPU's power as it can get.
What is the best way to lower the impact on CPU?
Adding a Thread.Sleep(n) should do the trick, but setting a high timeout value for the Sleep() method may make the application look unresponsive to the operating system.
Let's say I need to perform a task each minute or so in a console app.
I need to keep Main() running in an "infinite loop" while a timer will fire the event that will do the job. I would like to keep Main() with the lowest impact on CPU.
What methods do you suggest? Sleep() can be OK, but as I already mentioned, this might make the thread look unresponsive to the operating system.
LATER EDIT:
I want to explain better what I am looking for:
I need a console app, not a Windows service. Console apps can simulate Windows services on Windows Mobile 6.x systems with the Compact Framework.
I need a way to keep the app alive as long as the Windows Mobile device is running.
We all know that a console app runs as long as its static Main() function runs, so I need a way to prevent the Main() function from exiting.
In special situations (like updating the app), I need to request the app to stop, so I need to loop indefinitely and test for some exit condition. For example, this is why Console.ReadLine() is no use to me: there is no exit-condition check.
Regarding the above, I still want the Main() function to be as resource-friendly as possible, leaving aside the footprint of the function that checks for the exit condition.
To avoid the infinite loop, simply use a WaitHandle. To let the process be exited from the outside world, use an EventWaitHandle with a unique string. Below is an example.
If you start it the first time, it simply prints out a message every 10 seconds. If you then start a second instance of the program, it will inform the other process to exit gracefully and immediately exits itself as well. The CPU usage for this approach: 0%.
private static void Main(string[] args)
{
// Create a IPC wait handle with a unique identifier.
bool createdNew;
var waitHandle = new EventWaitHandle(false, EventResetMode.AutoReset, "CF2D4313-33DE-489D-9721-6AFF69841DEA", out createdNew);
var signaled = false;
// If the handle was already there, inform the other process to exit itself.
// Afterwards we'll also die.
if (!createdNew)
{
Log("Inform other process to stop.");
waitHandle.Set();
Log("Informer exited.");
return;
}
// Start a another thread that does something every 10 seconds.
var timer = new Timer(OnTimerElapsed, null, TimeSpan.Zero, TimeSpan.FromSeconds(10));
// Wait until someone tells us to die, or do something else every five seconds.
do
{
signaled = waitHandle.WaitOne(TimeSpan.FromSeconds(5));
// ToDo: Something else if desired.
} while (!signaled);
// The above loop with an interceptor could also be replaced by an endless waiter
//waitHandle.WaitOne();
Log("Got signal to kill myself.");
}
private static void Log(string message)
{
Console.WriteLine(DateTime.Now + ": " + message);
}
private static void OnTimerElapsed(object state)
{
Log("Timer elapsed.");
}
You can use the System.Threading.Timer class, which provides the ability to execute a callback asynchronously at a given interval.
public Timer(
TimerCallback callback,
Object state,
int dueTime,
int period
)
As an alternative there is the System.Timers.Timer class, which exposes an Elapsed event that is raised when a given period of time has elapsed.
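For example, a callback every minute with System.Threading.Timer might look like this (keep a reference to the timer so it is not garbage collected; the work in the callback is a placeholder):
using System;
using System.Threading;

class Program
{
    private static Timer _timer; // field reference keeps the timer alive

    static void Main()
    {
        _timer = new Timer(
            state => Console.WriteLine(DateTime.Now + ": doing the periodic work"),
            null,
            TimeSpan.Zero,              // first callback immediately
            TimeSpan.FromSeconds(60));  // then every 60 seconds

        Console.ReadLine();             // keep the process alive
    }
}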
Why would you condone the use of an infinite loop? For this example would setting the program up as a scheduled task, to be run every minute, not be more economical?
Why don't you write a small application and use the system's task scheduler to run it every minute, hour...etc?
Another option would be to write a Windows Service which runs in the background. The service could use a simple Alarm class like the following on MSDN:
http://msdn.microsoft.com/en-us/library/wkzf914z%28v=VS.90%29.aspx#Y2400
You can use it to periodically trigger your method. Internally this Alarm class uses a timer:
http://msdn.microsoft.com/en-us/library/system.timers.timer.aspx
Just set the timer's interval correctly (e.g. 60000 milliseconds) and it will raise the Elapsed event periodically. Attach an event handler to the Elapsed event to perform your task. No need to implement an "infinite loop" just to keep the application alive. This is handled for you by the service.
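A small sketch of that wiring (the 60-second interval and the DoWork method are just examples):
var timer = new System.Timers.Timer(60000);  // 60,000 ms = one minute
timer.Elapsed += (sender, e) => DoWork();    // DoWork is your periodic task
timer.AutoReset = true;                      // raise Elapsed repeatedly
timer.Start();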
I did this for an application that had to process files as they were dropped into a folder. Your best bet is a timer (as suggested) with a Console.ReadLine() at the end of Main(), without putting it in a loop.
Now, your concern about telling the app to stop:
I have also done this via some rudimentary "file" monitor. Simply creating the file "quit.txt" in the root folder of the application (by either my program or another application that might request it to stop) will make the application quit. Semi-code:
<do your timer thing here>
watcher = new FileSystemWatcher();
watcher.Path = <path of your application or other known accessible path>;
watcher.Changed += new FileSystemEventHandler(OnNewFile);
watcher.EnableRaisingEvents = true; // required, or the watcher never raises events
Console.ReadLine();
The OnNewFile could be something like this:
private static void OnNewFile(object source, FileSystemEventArgs e)
{
if (System.IO.Path.GetFileName(e.FullPath).ToLower() == "quit.txt")
{
// ... remove the current quit.txt here
Environment.Exit(1);
}
}
Now, you mentioned that this is (or could be) for a mobile application? You might not have FileSystemWatcher there. In that case, maybe you just need to "kill" the process (you said "in special situations (like updating the app), I need to request the app to stop"; whoever the "requester" to stop it is should simply kill the process).
It sounds to me like you want Main() to enter an interruptible loop. For this to happen, multiple threads must be involved somewhere (or your loop must poll periodically; I am not discussing that solution here though). Either another thread in the same application, or a thread in another process, must be able to signal to your Main() loop that it should terminate.
If this is true, then I think you want to use a ManualResetEvent or an EventWaitHandle. You can wait on that event until it is signalled (and the signalling would have to be done by another thread).
For example:
using System;
using System.Threading;
using System.Threading.Tasks;
namespace Demo
{
class Program
{
static void Main(string[] args)
{
startThreadThatSignalsTerminatorAfterSomeTime();
Console.WriteLine("Waiting for terminator to be signalled.");
waitForTerminatorToBeSignalled();
Console.WriteLine("Finished waiting.");
Console.ReadLine();
}
private static void waitForTerminatorToBeSignalled()
{
_terminator.WaitOne(); // Waits forever, but you can specify a timeout if needed.
}
private static void startThreadThatSignalsTerminatorAfterSomeTime()
{
// Instead of this thread signalling the event, a thread in a completely
// different process could do so.
Task.Factory.StartNew(() =>
{
Thread.Sleep(5000);
_terminator.Set();
});
}
// I'm using an EventWaitHandle rather than a ManualResetEvent because that can be named and therefore
// used by threads in a different process. For intra-process use you can use a ManualResetEvent, which
// uses slightly fewer resources and so may be a better choice.
static readonly EventWaitHandle _terminator = new EventWaitHandle(false, EventResetMode.ManualReset, "MyEventName");
}
}
You can use Begin-/End-Invoke to yield to other threads. E.g.
public static void ExecuteAsyncLoop(Func<bool> loopBody)
{
loopBody.BeginInvoke(ExecuteAsyncLoop, loopBody);
}
private static void ExecuteAsyncLoop(IAsyncResult result)
{
var func = ((Func<bool>)result.AsyncState);
try
{
if (!func.EndInvoke(result))
return;
}
catch
{
// Do something with exception.
return;
}
func.BeginInvoke(ExecuteAsyncLoop, func);
}
You would use it as such:
ExecuteAsyncLoop(() =>
{
// Do something.
return true; // Loop indefinitely.
});
This used 60% of one core on my machine (completely empty loop). Alternatively, you can use this (Source) code in the body of your loop:
private static readonly bool IsSingleCpuMachine = (Environment.ProcessorCount == 1);
[DllImport("kernel32", ExactSpelling = true)]
private static extern void SwitchToThread();
private static void StallThread()
{
// On a single-CPU system, spinning does no good
if (IsSingleCpuMachine) SwitchToThread();
// Multi-CPU system might be hyper-threaded, let other thread run
else Thread.SpinWait(1);
}
while (true)
{
// Do something.
StallThread();
}
That used 20% of one core on my machine.
To expound on a comment CodeInChaos made:
You can set a given thread's priority. Threads are scheduled for execution based on their priority. The scheduling algorithm used to determine the order of thread execution varies with each operating system. All threads default to "normal" priority, but if you set your loop's thread to low, it shouldn't steal time from threads set to normal.
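For example (a sketch; the loop body stands in for whatever the background work is):
var worker = new Thread(() =>
{
    while (true)
    {
        // Do the background work here.
    }
});
worker.IsBackground = true;              // don't keep the process alive on exit
worker.Priority = ThreadPriority.Lowest; // only runs when nothing else wants the CPU
worker.Start();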
The Timer approach is probably your best bet, but since you mention Thread.Sleep there is an interesting Thread.SpinWait or SpinWait struct alternative for similar problems that can sometimes be better than short Thread.Sleep invocations.
Also see this question: What's the purpose of Thread.SpinWait method?
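A tiny sketch of the SpinWait struct in use (workAvailable is a placeholder condition):
var spinner = new SpinWait();
while (!workAvailable)
{
    // Spins a few iterations, then starts yielding the thread, which can behave
    // better than a raw busy loop or a 1 ms Sleep for very short waits.
    spinner.SpinOnce();
}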
Lots of "advanced" answers here but IMO simply using a Thread.Sleep(lowvalue) should suffice for most.
Timers are also a solution, but the code behind a timer is, I would assume, also an infinite loop that fires your code at the elapsed intervals; timers just have the correct infinite-loop setup already.
If you need a large sleep, you can cut it into smaller sleeps.
So something like this is a simple and easy 0% CPU solution for a non-UI app.
static void Main(string[] args)
{
bool wait = true;
int sleepLen = 1 * 60 * 1000; // 1 minute
while (wait)
{
//... your code
var sleepCount = sleepLen / 100;
for (int i = 0; i < sleepCount; i++)
{
Thread.Sleep(100);
}
}
}
Regarding how the OS detects whether the app is unresponsive: I do not know of any such checks other than for UI applications, where there are methods to test whether the UI thread is processing UI messages. Thread sleeps on the UI thread will easily be discovered. The Windows "Application is not responding" check uses a simple native method, SendMessageTimeout, to detect whether the app has an unresponsive UI.
Any infinite loop in a UI app should always run on a separate thread.
To keep console applications running just add a Console.ReadLine() to the end of your code in Main().
If the user shouldn't be able to terminate the application you can do this with a loop like the following:
while (true){
Console.ReadLine();
}
I'm writing a Windows service that should perform an action every, let's say, 60 seconds.
What is the best way to implement that main loop?
Implementations I've seen so far:
1) Using a Timer object that executes a delegate every xx seconds
2) Using ManualResetEvents (the implementation I've seen only executes once, but as far as I understood, it is possible to create a loop with such reset events)
The windows service will run all the time, so it would be best to create a service that has no memory leak.
What is the best way to implement that main loop?
Edit after comments:
The action that will be performed every X seconds will start several (let's say at most 10) threads. Each thread does not run longer than 30 seconds.
Use a Timer. This will make the intention of the program the most clear. It is easy to start and stop the timer from your OnStart and OnStop methods, and the callbacks will fire on the thread pool so you won't tie up a thread. The Timer object won't leak memory by itself. (You could still write a bug that leaks memory, but that's equally easy to do with any implementation of the main loop.)
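A minimal sketch of that wiring (the service class, DoWork, and the 60-second interval are illustrative):
using System;
using System.ServiceProcess;
using System.Threading;

public class WorkerService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // First callback immediately, then every 60 seconds on a thread-pool thread.
        _timer = new Timer(_ => DoWork(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
    }

    protected override void OnStop()
    {
        _timer.Dispose();
    }

    private void DoWork()
    {
        // Start the (up to 10) worker threads here.
    }
}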
Consider using Quartz.net. I'm using this library and I'm very happy with it. You could set custom cron schedule that will suit your needs.
If you do use a System.Timers.Timer, make sure to set AutoReset to false and restart it at the end of your processing. Here's a full example:
Needed: A Windows Service That Executes Jobs from a Job Queue in a DB; Wanted: Example Code
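That non-overlapping pattern looks roughly like this (DoWork stands in for the real job):
var timer = new System.Timers.Timer(60000) { AutoReset = false };
timer.Elapsed += (sender, e) =>
{
    DoWork();      // the real work goes here
    timer.Start(); // re-arm only after the work is done, so runs never overlap
};
timer.Start();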
If there is no chance that your action will ever take longer than xx seconds, I would just go with the timer. If not, I would go with the ManualResetEvent. I assume you do not want more than one action to run concurrently.
Here is another pretty common pattern using a ManualResetEvent as both a stopping and a throttling mechanism.
public class Example
{
private Thread m_Thread;
private ManualResetEvent m_StopSignal = new ManualResetEvent(false);
public void Start()
{
m_Thread = new Thread(Run);
m_Thread.Start();
}
public void Stop()
{
m_StopSignal.Set();
if (!m_Thread.Join(MAX_WAIT_TIME))
{
m_Thread.Abort(); // Abort as a last resort.
}
}
private void Run()
{
while (!m_StopSignal.WaitOne(YOUR_INTERVAL))
{
// Your task goes here.
}
}
}