Reduce CPU usage while processing a large amount of data - C#

I am writing a real-time application which receives around 2,000 messages per second; they are pushed into a queue. I have written a background thread which processes the messages in the queue.
private void ProcessSocketMessage()
{
    while (!this.shouldStopProcessing)
    {
        while (this.messageQueue.Count > 0)
        {
            string message;
            bool result = this.messageQueue.TryDequeue(out message);
            if (result)
            {
                // Process the string and do some other stuff
                // Like updating the received message in a datagrid
            }
        }
    }
}
The problem with the above code is that it uses an insane amount of processing power: around 12% of CPU on a 2.40 GHz dual-core processor.
I have 4 blocks similar to the one above, which together take up literally 50% of the CPU's computing power.
Is there anything that can be optimized in the above code?
Adding a Thread.Sleep of 100 ms before the end of the second while loop does seem to improve performance by 50%. But am I doing something wrong?

This functionality is already provided by the Dataflow library's ActionBlock class. An ActionBlock has an input buffer that receives messages and processes them by calling an action for each one. By default, only one message is processed at a time. It doesn't use busy waiting.
void MyActualProcessingMethod(string it)
{
    // Process the string and do some other stuff
}

var myBlock = new ActionBlock<string>(someString => MyActualProcessingMethod(someString));

// Simulate a lot of messages
for (int i = 0; i < 100000; i++)
{
    myBlock.Post(someMessage);
}
When the messages are finished and/or we don't want any more of them, we tell the block to complete: it refuses any new messages and processes anything left in the input buffer:
myBlock.Complete();
Before we finish, we need to await the block's completion so the leftovers actually get processed:
await myBlock.Completion;
All Dataflow blocks can accept messages from multiple clients.
Blocks can be combined as well. The output of one block can feed another. The TransformBlock accepts a function that transforms an input into an output.
Typically each block uses tasks from the thread pool. By default one block processes only one message at a time. Different blocks run on different tasks or even different TaskSchedulers. This way, you can have one block do some heavy processing and push a result to another block that updates the UI.
string MyActualProcessingMethod(string it)
{
    // Process the string and do some other stuff
    // and send a progress message downstream
    return SomeProgressMessage;
}

void UpdateTheUI(string msg)
{
    statusBar1.Text = msg;
}

var myProcessingBlock = new TransformBlock<string, string>(msg => MyActualProcessingMethod(msg));
The UI will be updated by another block that runs on the UI thread. This is expressed through the ExecutionDataflowBlockOptions:
var runOnUI = new ExecutionDataflowBlockOptions
{
    TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
};

var myUpdater = new ActionBlock<string>(msg => UpdateTheUI(msg), runOnUI);

// Pass progress messages from the processor to the updater
myProcessingBlock.LinkTo(myUpdater, new DataflowLinkOptions { PropagateCompletion = true });
The code that posts messages to the pipeline's first block doesn't change:
// Simulate a lot of messages
for (int i = 0; i < 100000; i++)
{
    myProcessingBlock.Post(someMessage);
}

// We are finished, tell the block to process any leftover messages
myProcessingBlock.Complete();
In this case, as soon as the processor completes, it will notify the next block in the pipeline to complete. We need to wait for that final block to complete as well:
// Wait for the final block to finish
await myUpdater.Completion;
How about making the first block work in parallel? We can specify that up to e.g. 10 tasks will be used to process input messages, through its execution options:
var dopOptions = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10 };
var myProcessingBlock = new TransformBlock<string, string>(msg => MyActualProcessingMethod(msg), dopOptions);
The processor will process up to 10 messages in parallel, but the updater will still process them one by one, on the UI thread.
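Putting the pieces together, the full wiring might look like this (a sketch only; incomingMessages stands in for whatever source actually feeds the pipeline):

var dopOptions = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10 };
var runOnUI = new ExecutionDataflowBlockOptions
{
    TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
};

var myProcessingBlock = new TransformBlock<string, string>(
    msg => MyActualProcessingMethod(msg), dopOptions);
var myUpdater = new ActionBlock<string>(msg => UpdateTheUI(msg), runOnUI);
myProcessingBlock.LinkTo(myUpdater, new DataflowLinkOptions { PropagateCompletion = true });

foreach (var message in incomingMessages) // stand-in for the real message source
    myProcessingBlock.Post(message);

myProcessingBlock.Complete();
await myUpdater.Completion;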

Your best bet is to use a profiler to monitor the running application and determine for sure where the CPU is spending its time.
However, it looks like you have the possibility of a busy-wait loop when this.messageQueue.Count is 0. At a minimum, I would suggest adding a small pause when the queue is empty to allow messages to go onto the queue. Otherwise your CPU just spends time checking the queue over and over and over.
If the time is spent dequeuing messages, you may want to consider handling multiple messages at once (when multiple messages are available), assuming your queue allows you to pop several messages off in a single call.
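Going one step further than a fixed pause, a blocking queue removes the busy-wait entirely: the consumer thread sleeps until a message arrives. A minimal sketch, assuming the producer side can be switched from a bare ConcurrentQueue to BlockingCollection:

using System.Collections.Concurrent;

private readonly BlockingCollection<string> messageQueue =
    new BlockingCollection<string>(new ConcurrentQueue<string>());

private void ProcessSocketMessage()
{
    // GetConsumingEnumerable blocks while the queue is empty instead of
    // spinning, and the loop ends once CompleteAdding() is called on shutdown.
    foreach (string message in this.messageQueue.GetConsumingEnumerable())
    {
        // Process the string and do some other stuff
    }
}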

Related

C# Producer/Consumer queue with delays

The problem is the classic one of N producers (with N possibly large) and X consumers with limited resources (X is currently 4). The producers' messages come in (say, via MQTT) and are queued to be processed by the consumers on a FIFO basis. The important part of the processing is that each consumer may need to send back to the producers one or more "replies", and such replies should be at least some time apart (the exact delay is not important). The classic solution, where one starts X tasks that wait on the message queue, process, and loop, is easy to implement using, for example, System.Threading.Channels:
while (!cancellationToken.IsCancellationRequested && await queue.Reader.WaitToReadAsync())
{
    while (queue.Reader.TryRead(out IncomingMessage item))
    {
        // Do some processing.
        SendResponse(1);

        // Do some more processing.
        if (needsToSend2Response)
        {
            await Task.Delay(500);
            SendResponse(2);
        }
    }
}
This works, and works well, except that while a task is delaying it can't process any more messages, and that's obviously bad.
Possible solutions I thought of:
Use an outbound queue that processes the messages and makes sure there is at least a minimum delay between messages sent to the same producer (a rough sketch of this idea follows below).
Don't use a queue. Just start a new Task every time a new message comes in and arbitrate the limited resources using a semaphore. It works, but I don't see how to guarantee the FIFO requirement (sometimes messages from the same producer are processed in the wrong order).
Any other ideas?
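For what it's worth, a rough sketch of the first idea, again with System.Threading.Channels. SendResponse and the 500 ms gap are placeholders; a single pump preserves FIFO order, at the cost of briefly stalling other producers' replies during a delay:

// All replies funnel through one outbound channel; the pump enforces
// a minimum gap between consecutive replies to the same producer.
var outbound = Channel.CreateUnbounded<(string ProducerId, string Reply)>();

async Task OutboundPumpAsync(CancellationToken ct)
{
    var lastSent = new Dictionary<string, DateTime>();
    await foreach (var (producerId, reply) in outbound.Reader.ReadAllAsync(ct))
    {
        if (lastSent.TryGetValue(producerId, out var last))
        {
            var wait = TimeSpan.FromMilliseconds(500) - (DateTime.UtcNow - last);
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait, ct);
        }
        SendResponse(producerId, reply); // hypothetical send-back to the producer
        lastSent[producerId] = DateTime.UtcNow;
    }
}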

Handle RabbitMQ messages concurrently

I asked a question here about why starting work with Task.Run did not execute as many concurrent requests as I expected.
The reason behind this question was that I was trying to create a class which can pull messages off a RabbitMQ queue and process them concurrently, up to a maximum number of concurrent messages.
To do this I ended up with the following in the Received handler of the EventingBasicConsumer class.
async void Handle(EventArgs e)
{
    await _semaphore.WaitAsync();
    var thread = new Thread(() =>
    {
        Process(e);
        _semaphore.Release();
        _channel.BasicAck(....);
    });
    thread.Start();
}
However, the comments on the previous post said not to start a thread unless doing CPU-bound work.
The above handler does not know whether the work will be CPU-bound, network, disk or otherwise (Process is an abstract method).
Even so, I think I have to start a thread or task here; otherwise the Process method blocks the RabbitMQ thread and the event handler is not called again until it is finished, so I can only handle one message at a time.
Is starting a new Thread here okay? Originally I had used Task.Run, but this didn't produce as many workers as wanted. See the other post.
FYI: the number of concurrent threads is capped by setting the InitialCount on the semaphore.
As has already been said in the linked question, a big number of threads doesn't guarantee performance; if their number exceeds the number of logical cores, you get a thread starvation situation with no real work being done.
However, if you still need to manage the number of concurrent operations, you can give the TPL Dataflow library a try, setting MaxDegreeOfParallelism, as in this tutorial.
var workerBlock = new ActionBlock<EventArgs>(
    // Process event
    e => Process(e),
    // Specify a maximum degree of parallelism.
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = InitialCount
    });
var bufferBlock = new BufferBlock<EventArgs>();

// Link the blocks to automatically propagate the messages
bufferBlock.LinkTo(workerBlock);

// Asynchronously send a message
await bufferBlock.SendAsync(...);

// Synchronously send a message
bufferBlock.Post(...);
BufferBlock is a queue, so the order of messages will be preserved. You can also add different handlers (with different degrees of parallelism) by linking the blocks with a filter lambda:
bufferBlock.LinkTo(cpuWorkerBlock, e => e is CpuEventArgs);
bufferBlock.LinkTo(networkWorkerBlock, e => e is NetworkEventArgs);
bufferBlock.LinkTo(diskWorkerBlock, e => e is DiskEventArgs);
but in this case you should set up a default handler at the end of the chain so that messages can't disappear (you can use a NullTarget block for this):
bufferBlock.LinkTo(DataflowBlock.NullTarget<EventArgs>());
Also, blocks can act as observables and observers, so they work well with Reactive Extensions on the UI side.
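For instance, a source block can be bridged to Rx like this (a small sketch; AsObservable lives in the Dataflow package, the Subscribe extension below assumes System.Reactive is referenced, and note that an Rx subscription competes with other linked targets for messages):

// AsObservable exposes a source block's output as an IObservable<T>.
IObservable<EventArgs> observable = bufferBlock.AsObservable();
observable.Subscribe(e => Console.WriteLine("Observed: " + e));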

Possibility of SemaphoreSlim.Wait(0) (to prevent multiple execution) causing non-execution

The situation I am uncertain about concerns the usage of a "threadsafe" PipeStream where multiple threads can add messages to be written. If there is no queue of messages to be written, the current thread will begin writing to the reading party. If there is a queue, and the queue grows while the pipe is writing, I want the thread that began writing to deplete the queue.
I "hope" that this design (demonstrated below) discourages the continuous entering/releasing of the SemaphoreSlim and decreases the number of tasks scheduled. I say "hope" because I should test whether this complication has any positive performance implications. However, before even testing this I should first understand whether the code does what I think it will, so please consider the following class, and below it a sequence of events:
Note: I understand that execution of tasks is not tied to any particular thread, but I find this is the easiest way to explain.
class SemaphoreExample
{
    // Wrapper around a NamedPipeClientStream
    private readonly MessagePipeClient m_pipe =
        new MessagePipeClient("somePipe");

    private readonly SemaphoreSlim m_semaphore =
        new SemaphoreSlim(1, 1);

    private readonly BlockingCollection<Message> m_messages =
        new BlockingCollection<Message>(new ConcurrentQueue<Message>());

    public Task Send<T>(T content)
        where T : class
    {
        if (!this.m_messages.TryAdd(new Message<T>(content)))
            throw new InvalidOperationException("No more requests!");

        Task dequeue = TryDequeue();
        return Task.FromResult(true);

        // In reality this class (and method) is more complex.
        // There is a similar pipe (and wrkr) in the other direction.
        // The "sent jobs" are kept in a dictionary and this method
        // returns a task belonging to a completion source tied
        // to the "sent job". The wrkr responsible for the other
        // pipe reads a response and sets the corresponding
        // completion source.
    }

    private async Task TryDequeue()
    {
        if (!this.m_semaphore.Wait(0))
            return; // someone else is already here

        try
        {
            Message message;
            while (this.m_messages.TryTake(out message))
            {
                await this.m_pipe.WriteAsync(message);
            }
        }
        finally { this.m_semaphore.Release(); }
    }
}
1. Wrkr1 finishes writing to the pipe. (in TryDequeue)
2. Wrkr1 determines the queue is empty. (in TryDequeue)
3. Wrkr2 adds an item to the queue. (in Send)
4. Wrkr2 determines Wrkr1 occupies the semaphore, and returns. (in Send)
5. Wrkr1 releases the semaphore. (in TryDequeue)
6. The queue is left with 1 item that won't be acted upon for X amount of time.
Is this sequence of events possible? Should I forget this idea altogether and have every call to Send await on TryDequeue and the semaphore within it? Perhaps the potential performance implications of scheduling another task per method call are negligible, even at a "high" frequency.
UPDATE:
Following the advice of Alex, I am doing the following:
Let the caller of Send specify a maxWorkload integer that specifies how many items the caller is prepared to process (for other callers, in the worst case) before delegating work to another thread to handle any extra work. However, before creating the new thread, other callers of Send are given an opportunity to enter the semaphore, thereby possibly preventing the use of an additional thread.
To not let any work be left lingering in the queue, any worker who successfully entered the semaphore and did some work must check whether any new work was added after exiting the semaphore. If so, the same worker will try to re-enter (if maxWorkload is not reached) or delegate work as described above.
Example below: Send now sets up TryPool as a continuation of TryDequeue. TryPool only begins if TryDequeue returns true (i.e. it did some work while holding the semaphore).
// maxWorkload cannot be -1 for this method
private async Task<bool> TryDequeue(int maxWorkload)
{
    int currWorkload = 0;
    while (this.m_messages.Count != 0 && this.m_semaphore.Wait(0))
    {
        try
        {
            currWorkload = await Dequeue(currWorkload, maxWorkload);
            if (currWorkload >= maxWorkload)
                return true;
        }
        finally
        {
            this.m_semaphore.Release();
        }
    }
    return false;
}

private Task TryPool()
{
    if (this.m_messages.Count == 0 || !this.m_semaphore.Wait(0))
        return Task.FromResult(false);

    return Task.Run(async () =>
    {
        do
        {
            try
            {
                await Dequeue(0, -1);
            }
            finally
            {
                this.m_semaphore.Release();
            }
        }
        while (this.m_messages.Count != 0 && this.m_semaphore.Wait(0));
    });
}

private async Task<int> Dequeue(int currWorkload, int maxWorkload)
{
    while (currWorkload < maxWorkload || maxWorkload == -1)
    {
        Message message;
        if (!this.m_messages.TryTake(out message))
            return currWorkload;

        await this.m_pipe.WriteAsync(message);
        currWorkload++;
    }
    return maxWorkload;
}
I tend to call this pattern the "GatedBatchWriter": the first thread through the gate handles a batch of tasks, its own and a number of others on behalf of other writers, until it has done enough work.
This pattern is primarily useful when it is more efficient to batch work because of overheads associated with that work, e.g. writing larger blocks to disk in one go instead of multiple small ones.
And yes, this particular pattern has a specific race condition to be aware of: the "responsible writer", i.e. the one that got through the gate, determines that no more messages are in the queue and stops before releasing the semaphore (i.e. its write responsibility). A second writer arrived between those two decision points and failed to acquire write responsibility. Now there is a message in the queue that will not be delivered (or will be delivered late, when the next writer arrives).
Additionally, what you are doing now is not fair in terms of scheduling. If there are many messages, the queue might never be empty, and the writer that got through the gate will be busy writing messages on behalf of the others for all eternity. You need to limit the batch size for the responsible writer.
Some other things you may want to change are:
Have your Message contain a task completion token.
Have writers that could not acquire the write responsibility enqueue their message and wait for either of two task completions: the task completion associated with their message, or the release of the write responsibility.
Have the responsible writer set the completion for messages that it processed.
Have the responsible writer release its write responsibility when it has done enough work.
When a waiting writer is woken up by one of the two task completions:
if it was due to the completion token on its message, it can go its merry way.
otherwise, try to acquire the write responsibility, rinse, repeat... (a minimal sketch follows)
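A minimal sketch of the message shape this implies (illustrative only; the gate-release signal, the retry loop, and the responsible writer's batch loop are elided):

class Message
{
    public byte[] Payload;

    // Completed by the responsible writer once this message has been written.
    public readonly TaskCompletionSource<bool> Completion =
        new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
}

// A writer that failed to acquire write responsibility enqueues its message,
// then waits for whichever happens first:
async Task WaitOrRetryAsync(Message message, Task gateReleased)
{
    var finished = await Task.WhenAny(message.Completion.Task, gateReleased);
    if (finished == message.Completion.Task)
        return; // the responsible writer handled our message
    // Otherwise: try to acquire the write responsibility, rinse, repeat...
}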
One more note: if there are a lot of messages, i.e. a high message load on average, a dedicated thread / long-running task handling the queue will generally perform better.
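For that last point, a one-liner sketch (ProcessQueueLoop is a placeholder for the queue-draining loop):

// TaskCreationOptions.LongRunning hints to the scheduler that this loop
// deserves its own dedicated thread rather than a thread-pool worker.
var pump = Task.Factory.StartNew(
    () => ProcessQueueLoop(),
    CancellationToken.None,
    TaskCreationOptions.LongRunning,
    TaskScheduler.Default);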

Usage of multithreading could lead to excessive memory use

I have a Windows service project that logs messages to a database (or another place). The frequency of these messages could go up to ten per second. Since sending and processing the messages shouldn't delay the main process of the service, I start a new thread to process every message. This means that if the main process needs to send 100 log messages, 100 threads are started, each processing one message. I learned that when a thread is done, it is cleaned up, so I don't have to dispose of it. As long as I dispose of all used objects in the thread, everything should work fine.
The service could run into an exception that leads to shutting down the service. Before the service shuts down, it should wait for all threads that were logging messages. To achieve this, it adds the thread to a list every time a thread is started. When the wait-for-threads method is called, each thread in the list is checked to see whether it is still alive and, if so, Join is used to wait for it.
The code:
Creating the thread:
/// <summary>
/// Creates a new thread and sends the message
/// </summary>
/// <param name="logMessage"></param>
private static void ThreadSend(IMessage logMessage)
{
    ParameterizedThreadStart threadStart = new ParameterizedThreadStart(MessageHandler.HandleMessage);
    Thread messageThread = new Thread(threadStart);
    messageThread.Name = "LogMessageThread";
    messageThread.Start(logMessage);
    threads.Add(messageThread);
}
Waiting for the threads to end:
/// <summary>
/// Waits for threads that are still being processed
/// </summary>
public static void WaitForThreads()
{
    int i = 0;
    foreach (Thread thread in threads)
    {
        i++;
        if (thread.IsAlive)
        {
            Debug.Print("waiting for {0} - {1} to end...", thread.Name, i);
            thread.Join();
        }
    }
}
Now my main concern is that if this service runs for a month, it will still have all the threads (millions) in the list, most of them dead. This will eat memory, and I don't know how much. All in all this doesn't seem like good practice to me; I want to clean up finished threads, but I can't find out how to do it. Does anyone have a good or best practice for this?
Remove the threads from the list if they are dead?
/// <summary>
/// Waits for threads that are still being processed
/// </summary>
public static void WaitForThreads()
{
    List<Thread> toRemove = new List<Thread>();
    int i = 0;
    foreach (Thread thread in threads)
    {
        i++;
        if (thread.IsAlive)
        {
            Debug.Print("waiting for {0} - {1} to end...", thread.Name, i);
            thread.Join();
        }
        else
        {
            toRemove.Add(thread);
        }
    }
    threads.RemoveAll(x => toRemove.Contains(x));
}
Have a look at Task Parallelism
First of all: creating one thread per log message is not a good idea. Either use the ThreadPool or create a limited number of worker threads which handle the log items from a common queue (producer/consumer).
Second: of course you also need to remove the thread references from the list! Either each thread can remove itself from the list when its thread method ends, or you can do it on a regular basis, for example with a timer that runs every half hour, checks the list for dead threads, and removes them (see the sketch below).
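The timer variant can be as small as this (a sketch; threads is the existing list, and access to it needs whatever synchronization the rest of the code uses):

// Periodically prune threads that have finished, so the list can't
// grow without bound.
var cleanupTimer = new System.Threading.Timer(
    _ => { lock (threads) threads.RemoveAll(t => !t.IsAlive); },
    null,
    dueTime: TimeSpan.FromMinutes(30),
    period: TimeSpan.FromMinutes(30));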
If all you're doing in those threads is logging, you should probably have a single logging thread and a shared queue that the main thread puts messages on. The logging thread can then read the queue and log. This is incredibly easy with the BlockingCollection.
Create the queue in the service's main thread:
BlockingCollection<IMessage> LogMessageQueue = new BlockingCollection<IMessage>();
Your service's main thread creates a Logger instance (see below), which starts a thread to process log messages. The main thread adds items to LogMessageQueue; the logger thread reads them from the queue. When the main thread wants to shut down, it calls LogMessageQueue.CompleteAdding. The logger will empty the queue and exit.
Main thread would look like this:
// start the logger
Logger _loggingThread = new Logger(LogMessageQueue);
// to log a message:
LogMessageQueue.Add(logMessage);
// when the program needs to shut down:
LogMessageQueue.CompleteAdding();
And the logger class:
class Logger
{
BlockingCollection<IMessage> _queue;
Thread _loggingThread;
public Logger(BlockingCollection<IMessage> queue)
{
_queue = queue;
_loggingThread = new Thread(LoggingThreadProc);
}
private void LoggingThreadProc(object state)
{
IMessage msg;
while (_queue.TryTake(out msg, TimeSpan.Infinite))
{
// log the item
}
}
}
This way you have just one additional thread, messages are guaranteed to be processed in the order they're sent (not true of your current approach), and you don't have to worry about keeping track of thread shutdown, etc.
Update
If some of your log messages will take time to process (the email you described, for example), you can process them asynchronously. For example:
while (_queue.TryTake(out msg, Timeout.InfiniteTimeSpan))
{
    if (msg.Type == Email)
    {
        // start asynchronous task to send email
    }
    else
    {
        // write to log file
    }
}
This way, only those messages that potentially take a lot of time will run asynchronously. You can also have a secondary queue there if you want, for the email messages. That way you won't get bogged down with a bunch of email threads; rather, you limit it to one or two, or perhaps a handful.
Note that you can also have multiple Logger instances if you want, all reading from the same message queue. Just make sure they're each writing to a different log file. The queue itself will support multiple consumers.
I think that, in general, your approach to solving the issue is maybe not the best practice.
I mean, instead of creating thousands of threads, you just want to store thousands of messages in a database, right? And it seems you want to do this asynchronously.
But creating a thread for each message is not really a good idea and does not actually solve the issue...
Instead, I would try to implement something like message queues. You can have multiple queues and each queue has its own thread. If messages are coming in, you send them to one of the queues (alternating)...
The queue either waits for a certain number of messages, or always waits a certain amount of time (e.g. 1 second, depending on how long it takes to store, say, 100 messages in the database) until it tries to store the queued messages in the database. (A sketch follows below.)
This way you should always have a constant number of threads and you shouldn't see any performance issues...
It would also enable you to batch-insert data, instead of inserting rows one by one with the overhead of DB connections etc...
Of course, if your database is slower than the tasks can store the messages, more and more messages will be queued... but that's true of your current solution, too.
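A sketch of that batching idea (WriteBatchToDatabase is a placeholder; the thresholds of 100 messages / 1 second are the ones mentioned above):

var queue = new BlockingCollection<IMessage>();

void BatchWriterLoop()
{
    var batch = new List<IMessage>(100);
    while (!queue.IsCompleted)
    {
        batch.Clear();
        // Wait up to a second for the first message, then drain whatever
        // else is immediately available, up to 100 per database round trip.
        if (queue.TryTake(out IMessage first, TimeSpan.FromSeconds(1)))
        {
            batch.Add(first);
            while (batch.Count < 100 && queue.TryTake(out IMessage next))
                batch.Add(next);

            WriteBatchToDatabase(batch); // placeholder batch insert
        }
    }
}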
Since multiple answers and comments led to my solution, I will post the complete code here.
I used the ThreadPool to manage the threads, and code from this page for the waiting function.
Creating the thread:
private static void ThreadSend(IMessage logMessage)
{
    ThreadPool.QueueUserWorkItem(MessageHandler.HandleMessage, logMessage);
}
Waiting for the threads to finish:
public static bool WaitForThreads(int maxWaitingTime)
{
    int maxThreads = 0;
    int placeHolder = 0;
    int availableThreads = 0;

    while (maxWaitingTime > 0)
    {
        System.Threading.ThreadPool.GetMaxThreads(out maxThreads, out placeHolder);
        System.Threading.ThreadPool.GetAvailableThreads(out availableThreads, out placeHolder);

        // Stop if all threads are available
        if (availableThreads == maxThreads)
        {
            return true;
        }

        System.Threading.Thread.Sleep(TimeSpan.FromMilliseconds(1000));
        --maxWaitingTime;
    }
    return false;
}
Optionally, you can add this somewhere outside these methods to limit the number of threads in the pool:
System.Threading.ThreadPool.SetMaxThreads(MaxWorkerThreads, MaxCompletionPortThreads);

Throttling speed of email sending process

Sorry the title is a bit crappy; I couldn't quite word it properly.
Edit: I should note this is a C# console app.
I've prototyped out a system that works like so (this is rough, pseudo-code-ish):
var collection = grabfromdb();

foreach (item in collection)
{
    SendAnEmail();
}

SendAnEmail:
    SmtpClient mailClient = new SmtpClient();
    mailClient.SendCompleted += new SendCompletedEventHandler(SendComplete);
    mailClient.SendAsync('the mail message');

SendComplete:
    if (anyErrors)
    {
        errorHandling();
    }
    else
    {
        HitDBAndMarkAsSendOK();
    }
Obviously this setup is not ideal. If the initial collection has, say, 10,000 records, then it's going to new up 10,000 instances of SmtpClient in fairly short order as quickly as it can step through the rows, and likely asplode in the process.
My ideal end game is to have something like 10 concurrent emails going out at once.
A hacky solution comes to mind: add a counter that increments when SendAnEmail() is called and decrements when SendComplete fires. Before SendAnEmail() is called in the initial loop, check the counter; if it's too high, sleep for a small period of time and then check it again.
I'm not sure that's such a great idea, and figure the SO hive mind would have a way to do this properly.
I have very little knowledge of threading and am not sure if it would be an appropriate use here, e.g. sending email in a background thread after first checking the number of child threads to ensure there aren't too many being used. Or whether there is some type of "thread throttling" built in.
Update
Following the advice of Steven A. Lowe, I now have:
A Dictionary holding my emails and a unique key (this is the email queue)
A FillQueue method, which populates the dictionary
A ProcessQueue method, which runs on a background thread. It checks the queue and SendAsyncs any email in it.
A SendCompleted delegate which removes the email from the queue and calls FillQueue again.
I have a few problems with this setup. I think I've missed the boat with the background thread; should I be spawning one of these for each item in the dictionary? How can I get the thread to "hang around", for lack of a better word? If the email queue empties, the thread ends.
final update
I've put a while(true) { } in the background thread. If the queue is empty, it waits a few seconds and tries again. If the queue is repeatedly empty, I break out of the while and the program ends... Works fine. I'm a bit worried about the while(true) business, though.
Short Answer
Use a queue as a finite buffer, processed by its own thread.
Long Answer
Call a fill-queue method to create a queue of emails, limited to (say) 10. Fill it with the first 10 unsent emails. Launch a thread to process the queue: for each email in the queue, send it asynchronously. When the queue is empty, sleep for a while and check again. Have the completion delegate remove the sent or errored email from the queue and update the database, then call the fill-queue method to read more unsent emails into the queue (back up to the limit).
You'll only need locks around the queue operations, and will only have to manage (directly) the one thread to process the queue. You will never have more than N+1 threads active at once, where N is the queue limit.
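A minimal sketch of the finite-buffer part of this (the database bookkeeping and refill logic described above are elided; SendAsyncWithCallback is a placeholder):

// Bounded to 10 unsent emails; Add blocks the producer when the buffer is full.
var emailQueue = new BlockingCollection<MailMessage>(boundedCapacity: 10);

var queueThread = new Thread(() =>
{
    // GetConsumingEnumerable blocks while the queue is empty and ends
    // once CompleteAdding() is called.
    foreach (var mail in emailQueue.GetConsumingEnumerable())
    {
        SendAsyncWithCallback(mail); // placeholder: async send + completion delegate
    }
});
queueThread.Start();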
I believe your hacky solution actually would work. Just make sure you have a lock statement around the bits where you increment and decrement the counter:
class EmailSender
{
    object SimultaneousEmailsLock = new object(); // must be initialized, or lock() throws
    int SimultaneousEmails;
    public string[] Recipients;

    void SendAll()
    {
        foreach (string Recipient in Recipients)
        {
            while (SimultaneousEmails > 10) Thread.Sleep(10);
            SendAnEmail(Recipient);
        }
    }

    void SendAnEmail(string Recipient)
    {
        lock (SimultaneousEmailsLock)
        {
            SimultaneousEmails++;
        }
        // ... send it ...
    }

    void FinishedEmailCallback()
    {
        lock (SimultaneousEmailsLock)
        {
            SimultaneousEmails--;
        }
        // ... etc ...
    }
}
I would add all my messages to a Queue, and then spawn e.g. 10 threads which send emails until the queue is empty. Pseudo-ish C# (cleaned up a little so it compiles):
class EmailSender
{
    Queue<Message> messages;
    List<Thread> threads;

    public void Send(IEnumerable<Message> messages, int threadCount)
    {
        this.messages = new Queue<Message>(messages);
        this.threads = new List<Thread>();

        while (threadCount-- > 0)
            threads.Add(new Thread(SendMessages));

        threads.ForEach(t => t.Start());

        while (threads.Any(t => t.IsAlive))
            Thread.Sleep(50);
    }

    private void SendMessages()
    {
        while (true)
        {
            Message m;
            lock (messages)
            {
                try
                {
                    m = messages.Dequeue();
                }
                catch (InvalidOperationException)
                {
                    // No more messages
                    return;
                }
            }

            // Send message in some way. Not in an async way,
            // since we are already kind of async.
            Thread.Sleep(50); // Perhaps take a quick rest
        }
    }
}
If the message is the same and just has many recipients, just swap the Message for a Recipient and add a single Message parameter to the Send method.
You could use a .NET Timer to set up the schedule for sending messages. Whenever the timer fires, grab the next 10 messages and send them all, then repeat. Or if you want a general rate (10 messages per second), you could have the timer fire every 100 ms and send a single message each time.
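For instance (a sketch with System.Timers.Timer; GetNextUnsentBatch is a placeholder for the database read, and mailClient is the SmtpClient from the question):

// Fire once a second and send the next batch of up to 10 messages.
var timer = new System.Timers.Timer(1000);
timer.Elapsed += (sender, args) =>
{
    foreach (MailMessage mail in GetNextUnsentBatch(10))
        mailClient.SendAsync(mail, null); // completion handled in SendCompleted
};
timer.Start();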
If you need more advanced scheduling, you could look at a scheduling framework like Quartz.NET
Isn't this something that Thread.Sleep() can handle?
You are correct in thinking that background threading can serve a good purpose here. Basically what you want to do is create a background thread for this process, let it run its own way, delays and all, and then terminate the thread when the process is done, or leave it running indefinitely (turning it into a Windows service or something similar would be a good idea).
A little intro on multi-threading can be read here (with Thread.Sleep included!).
A nice intro on Windows Services can be read here.
