I have an object that requires a lot of initialization (1-2 seconds on a beefy machine). Once it is initialized, though, a typical "job" only takes about 20 milliseconds.
In order to prevent it from being re-initialized every time an app wants to use it (which could be 50 times a second, or not at all for minutes, in typical usage), I decided to give it a job queue and have it run on its own thread, checking to see if there is any work for it in the queue. However, I'm not entirely sure how to make a thread that runs indefinitely with or without work.
Here's what I have so far; any critique is welcome.
private void DoWork()
{
while (true)
{
if (JobQue.Count > 0)
{
// do work on JobQue.Dequeue()
}
else
{
System.Threading.Thread.Sleep(50);
}
}
}
Afterthought: I was thinking I may need to end this thread gracefully instead of letting it run forever, so I think I will add a Job type that tells the thread to end. Any thoughts on how to end a thread like this are also appreciated.
You need to lock anyway, so you can Wait and Pulse:
while(true) {
SomeType item;
lock(queue) {
while(queue.Count == 0) {
Monitor.Wait(queue); // releases lock, waits for a Pulse,
// and re-acquires the lock
}
item = queue.Dequeue(); // we have the lock, and there's data
}
// process item **outside** of the lock
}
with add like:
lock(queue) {
queue.Enqueue(item);
// if the queue was empty, the worker may be waiting - wake it up
if(queue.Count == 1) { Monitor.PulseAll(queue); }
}
You might also want to look at this question, which limits the size of the queue (blocking if it is too full).
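Putting the two fragments above together, a minimal self-contained sketch might look like the following. The class and member names are illustrative (Action is used as the job type), and shutdown is done with a stop flag plus a pulse rather than a special job type; treat it as a sketch of the idea, not the answer's exact code.
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch only: a worker that owns a queue and sleeps on the
// monitor until work (or a stop request) arrives.
class JobWorker
{
    private readonly Queue<Action> queue = new Queue<Action>();
    private readonly Thread worker;
    private bool stopping;

    public JobWorker()
    {
        worker = new Thread(Loop);
        worker.IsBackground = true;
        worker.Start();
    }

    public void Enqueue(Action job)
    {
        lock (queue)
        {
            queue.Enqueue(job);
            Monitor.Pulse(queue); // wake the worker if it is waiting
        }
    }

    public void Stop()
    {
        lock (queue)
        {
            stopping = true;
            Monitor.Pulse(queue);
        }
        worker.Join(); // lets any remaining jobs drain first
    }

    private void Loop()
    {
        while (true)
        {
            Action job;
            lock (queue)
            {
                while (queue.Count == 0)
                {
                    if (stopping) return;   // exit only once the queue is empty
                    Monitor.Wait(queue);    // releases the lock while waiting
                }
                job = queue.Dequeue();
            }
            job(); // run the job outside the lock
        }
    }
}
If you prefer the quit-job approach from the question, enqueue a sentinel job (for example null) instead of setting a flag, and have the loop return when it dequeues it.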
You need a synchronization primitive, like a WaitHandle (look at the static methods). This way you can 'signal' the worker thread that there is work. It checks the queue and keeps on working until the queue is empty, at which point it waits for the handle to be signalled again.
Make one of the job items be a quit command too, so that you can signal the worker thread when it's time to exit the thread
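A rough sketch of that signalling approach, using an AutoResetEvent and a null job as the quit command (the names here are illustrative, not a definitive implementation):
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch: signal-based worker with a null job as the quit command.
class SignalledWorker
{
    private readonly Queue<object> jobs = new Queue<object>();
    private readonly AutoResetEvent workReady = new AutoResetEvent(false);

    public void Enqueue(object job)
    {
        lock (jobs) { jobs.Enqueue(job); }
        workReady.Set(); // signal the worker that there is work
    }

    // Run this on the worker thread.
    public void Run()
    {
        while (true)
        {
            workReady.WaitOne(); // block until signalled, no polling
            while (true)
            {
                object job;
                lock (jobs)
                {
                    if (jobs.Count == 0) break; // drained; go back to waiting
                    job = jobs.Dequeue();
                }
                if (job == null) return; // quit command
                Process(job);
            }
        }
    }

    private void Process(object job) { /* the ~20 ms of real work goes here */ }
}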
In most cases, I've done this quite similarly to how you've set it up -- but not in the same language. I had the advantage of working with a data structure (in Python) which will block the thread until an item is put into the queue, negating the need for the sleep call.
If .NET provides a class like that, I'd look into using it. A thread blocking is much better than a thread spinning on sleep calls.
The job you can pass could be as simple as a "null"; if the code receives a null, it knows it's time to break out of the while and go home.
If you don't really need to have the thread exit (and just want to keep it from keeping your application alive), you can set Thread.IsBackground to true and it will end when all non-background threads end. Will and Marc both have good solutions for handling the queue.
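For example (a sketch):
Thread worker = new Thread(DoWork); // DoWork being the loop method from the question
worker.IsBackground = true;         // lets the process exit even while this thread is still looping
worker.Start();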
Grab the Parallel Framework (Parallel Extensions, now part of .NET 4). It has a BlockingCollection<T> which you can use as a job queue. How you'd use it is:
Create the BlockingCollection<T> that will hold your tasks/jobs.
Create some threads which have a never-ending loop (while (true) { /* take a job off the queue */ })
Set the threads going
Add jobs to the collection when they come available
The threads will be blocked until an item appears in the collection. Whoever's turn it is will get it (depends on the CPU). I'm using this now and it works great.
It also has the advantage of relying on MS to write that particularly nasty bit of code where multiple threads access the same resource. And whenever you can get somebody else to write that you should go for it. Assuming, of course, they have more technical/testing resources and combined experience than you.
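A sketch of that setup, assuming .NET 4's System.Collections.Concurrent (the job type, thread count and job count are illustrative):
using System;
using System.Collections.Concurrent;
using System.Threading;

class Program
{
    static void Main()
    {
        // 1. The collection that holds the jobs.
        var jobs = new BlockingCollection<Action>();

        // 2./3. Create and start a couple of worker threads.
        for (int i = 0; i < 2; i++)
        {
            new Thread(() =>
            {
                // GetConsumingEnumerable blocks while the collection is empty
                // and completes once CompleteAdding() has been called.
                foreach (Action job in jobs.GetConsumingEnumerable())
                {
                    job();
                }
            }) { IsBackground = true }.Start();
        }

        // 4. Add jobs as they become available.
        for (int i = 0; i < 10; i++)
        {
            int n = i; // avoid capturing the loop variable
            jobs.Add(() => Console.WriteLine("job " + n));
        }

        jobs.CompleteAdding(); // lets the workers finish and exit
        Thread.Sleep(500);     // crude wait, for demo purposes only
    }
}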
I've implemented a background-task queue without using any kind of while loop, or pulsing, or waiting, or, indeed, touching Thread objects at all. And it seems to work. (By which I mean it's been in production environments handling thousands of tasks a day for the last 18 months without any unexpected behavior.) It's a class with two significant properties, a List<Task> used as a queue and a BackgroundWorker. There are three significant methods, abbreviated here:
private void BackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
if (TaskQueue.Count > 0)
{
TaskQueue[0].Execute();
}
}
private void BackgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
Task t = TaskQueue[0];
lock (TaskQueue)
{
TaskQueue.Remove(t);
}
if (TaskQueue.Count > 0 && !BackgroundWorker.IsBusy)
{
BackgroundWorker.RunWorkerAsync();
}
}
public void Enqueue(Task t)
{
lock (TaskQueue)
{
TaskQueue.Add(t);
}
if (!BackgroundWorker.IsBusy)
{
BackgroundWorker.RunWorkerAsync();
}
}
It's not that there's no waiting and pulsing. But that all happens inside the BackgroundWorker. This just wakes up whenever a task is dropped in the queue, runs until the queue is empty, and then goes back to sleep.
I am far from an expert on threading. Is there a reason to mess around with System.Threading for a problem like this if using a BackgroundWorker will do?
I have a unit of work I'm doing in a thread (not the main thread). Under certain circumstances I would like to put this thread to sleep for 10 seconds. Is Thread.Sleep(10000) the most resource efficient way to do this?
Is Thread.Sleep(10000) the most resource efficient way to do this?
Yes in the sense that it is not busy-waiting but giving up the CPU.
But it is wasting a Thread. You shouldn't scale this to many sleeping threads.
As no-one else has mentioned it...
If you want another thread to be able to wake up your "sleeping" thread, you may well want to use Monitor.Wait instead of Thread.Sleep:
private readonly object sharedMonitor;
private bool shouldStop;
public void Stop()
{
lock (sharedMonitor)
{
shouldStop = true;
Monitor.Pulse(sharedMonitor);
}
}
public void Loop()
{
while (true)
{
// Do some work...
lock (sharedMonitor)
{
if (shouldStop)
{
return;
}
Monitor.Wait(sharedMonitor, 10000);
if (shouldStop)
{
return;
}
}
}
}
Note that we only access shouldStop within the lock, so there aren't any memory model concerns.
You may want to loop round waiting until you've really slept for 10 seconds, just in case you get spurious wake-ups - it depends on how important it is that you don't do the work again for another 10 seconds. (I've never knowingly encountered spurious wakes, but I believe they're possible.)
Make a habit of using Thread.CurrentThread.Join(timeout) instead of Thread.Sleep.
The difference is that Join will still do some message pumping (e.g. GUI & COM).
Most of the time it doesn't matter but it makes life easier if you ever need to use some COM or GUI object in your application.
This will process something every x seconds without using a dedicated thread.
I'm not sure how not using your own thread compares with a task being created every two seconds, though:
public void LogProcessor()
{
if (_isRunning)
{
WriteNewLogsToDisk();
// Come back in 2 seconds
var t = Task.Run(async delegate
{
await Task.Delay(2000);
LogProcessor();
});
}
}
From a resource-efficiency standpoint, yes.
For design, it depends on the circumstances of the pause. You want your work to be autonomous, so if the thread has to pause because it knows to wait, put the pause in the thread code itself using the static Thread.Sleep method. If the pause happens because of some other external event, then you need to control the thread's processing from outside - for example by having the thread owner keep a reference to the thread and signal it to pause.
Yes. There's no other efficient or safe way to sleep the thread.
However, if you're doing some work in a loop, you may want to use Sleep in loop to make aborting the thread easier, in case you want to cancel your work.
Here's an example:
volatile bool exit = false;
...
void MyThread()
{
while (!exit)
{
// do your stuff here...
// sleep for 10 seconds total, but check the exit flag every 10 ms
int sc = 0;
while (sc < 1000 && !exit) { Thread.Sleep(10); sc++; }
}
}
I'm kinda new to concurrent programming, and trying to understand the benefits of using Monitor.Pulse and Monitor.Wait.
MSDN's example is the following:
class MonitorSample
{
const int MAX_LOOP_TIME = 1000;
Queue m_smplQueue;
public MonitorSample()
{
m_smplQueue = new Queue();
}
public void FirstThread()
{
int counter = 0;
lock(m_smplQueue)
{
while(counter < MAX_LOOP_TIME)
{
//Wait, if the queue is busy.
Monitor.Wait(m_smplQueue);
//Push one element.
m_smplQueue.Enqueue(counter);
//Release the waiting thread.
Monitor.Pulse(m_smplQueue);
counter++;
}
}
}
public void SecondThread()
{
lock(m_smplQueue)
{
//Release the waiting thread.
Monitor.Pulse(m_smplQueue);
//Wait in the loop, while the queue is busy.
//Exit on the time-out when the first thread stops.
while(Monitor.Wait(m_smplQueue,1000))
{
//Pop the first element.
int counter = (int)m_smplQueue.Dequeue();
//Print the first element.
Console.WriteLine(counter.ToString());
//Release the waiting thread.
Monitor.Pulse(m_smplQueue);
}
}
}
//Return the number of queue elements.
public int GetQueueCount()
{
return m_smplQueue.Count;
}
static void Main(string[] args)
{
//Create the MonitorSample object.
MonitorSample test = new MonitorSample();
//Create the first thread.
Thread tFirst = new Thread(new ThreadStart(test.FirstThread));
//Create the second thread.
Thread tSecond = new Thread(new ThreadStart(test.SecondThread));
//Start threads.
tFirst.Start();
tSecond.Start();
//wait to the end of the two threads
tFirst.Join();
tSecond.Join();
//Print the number of queue elements.
Console.WriteLine("Queue Count = " + test.GetQueueCount().ToString());
}
}
and I can't see the benefit of using Wait and Pulse instead of this:
public void FirstThreadTwo()
{
int counter = 0;
while (counter < MAX_LOOP_TIME)
{
lock (m_smplQueue)
{
m_smplQueue.Enqueue(counter);
counter++;
}
}
}
public void SecondThreadTwo()
{
while (true)
{
lock (m_smplQueue)
{
int counter = (int)m_smplQueue.Dequeue();
Console.WriteLine(counter.ToString());
}
}
}
Any help is most appreciated.
Thanks
To describe "advantages", a key question is "over what?". If you mean "in preference to a hot-loop", well, CPU utilization is obvious. If you mean "in preference to a sleep/retry loop" - you can get much faster response (Pulse doesn't need to wait as long) and use lower CPU (you haven't woken up 2000 times unnecessarily).
Generally, though, people mean "in preference to Mutex etc".
I tend to use these extensively, even in preference to mutex, reset-events, etc; reasons:
they are simple, and cover most of the scenarios I need
they are relatively cheap, since they don't need to go all the way to OS handles (unlike Mutex etc, which is owned by the OS)
I'm generally already using lock to handle synchronization, so chances are good that I already have a lock when I need to wait for something
it achieves my normal aim - allowing 2 threads to signal completion to each other in a managed way
I rarely need the other features of Mutex etc (such as being inter-process)
There is a serious flaw in your snippet: SecondThreadTwo() will fail badly when it tries to call Dequeue() on an empty queue. You probably got it to work by having FirstThreadTwo() execute a fraction of a second before the consumer thread, most likely by starting it first. That's an accident, one that will stop working after running these threads for a while or starting them with a different machine load. This kind of code can accidentally work error-free for quite a while, which makes the occasional failure very hard to diagnose.
There is no way to write a locking algorithm that blocks the consumer until the queue becomes non-empty with just the lock statement. A busy loop that constantly enters and exits the lock works but is a very poor substitute.
Writing this kind of code is best left to the threading gurus, it is very hard to prove it works in all cases. Not just absence of failure modes like this one or threading races. But also general fitness of the algorithm that avoids deadlock, livelock and thread convoys. In the .NET world, the gurus are Jeffrey Richter and Joe Duffy. They eat locking designs for breakfast, both in their books and their blogs and magazine articles. Stealing their code is expected and accepted. And partly entered into the .NET framework with the additions in the System.Collections.Concurrent namespace.
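For reference, the consumer from the snippet can be made safe with the same Wait/Pulse idiom the MSDN sample uses. A sketch, assuming it is added to the question's MonitorSample class and that the producer calls Monitor.Pulse(m_smplQueue) after each Enqueue:
public void SecondThreadTwoFixed()
{
    while (true)
    {
        int counter;
        lock (m_smplQueue)
        {
            // Sleep inside the lock until the producer pulses and there is data.
            while (m_smplQueue.Count == 0)
            {
                Monitor.Wait(m_smplQueue);
            }
            counter = (int)m_smplQueue.Dequeue();
        }
        // Like the original, this loops forever; add a stop condition as needed.
        Console.WriteLine(counter.ToString());
    }
}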
It is a performance improvement to use Monitor.Pulse/Wait, as you have guessed. It is a relatively expensive operation to acquire a lock. By using Monitor.Wait, your thread will sleep until some other thread wakes your thread up with Monitor.Pulse.
You'll see the difference in TaskManager because one processor core will be pegged even while nothing is in the queue.
The advantages of Pulse and Wait are that they can be used as building blocks for all other synchronization mechanisms including mutexes, events, barriers, etc. There are things that can be done with Pulse and Wait that cannot be done with any other synchronization mechanism in the BCL.
All of the interesting stuff happens inside the Wait method. Wait will exit the critical section and put the thread in the WaitSleepJoin state by placing it in the waiting queue. Once Pulse is called then the next thread in the waiting queue moves to the ready queue. Once the thread switches to the Running state it reenters the critical section. This is important enough to repeat another way: Wait will release the lock and reacquire it in an atomic fashion. No other synchronization mechanism has this feature.
The best way to envision this is to try to replicate the behavior with some other strategy and then see what can go wrong. Let us try this exercise with a ManualResetEvent since the Set and WaitOne methods seem like they may be analogous. Our first attempt might look like this.
void FirstThread()
{
lock (mre)
{
// Do stuff.
mre.Set();
// Do stuff.
}
}
void SecondThread()
{
lock (mre)
{
// Do stuff.
while (!CheckSomeCondition())
{
mre.WaitOne();
}
// Do stuff.
}
}
It should be easy to see that this code can deadlock. So what happens if we try this naive fix?
void FirstThread()
{
lock (mre)
{
// Do stuff.
mre.Set();
// Do stuff.
}
}
void SecondThread()
{
lock (mre)
{
// Do stuff.
}
while (!CheckSomeCondition())
{
mre.WaitOne();
}
lock (mre)
{
// Do stuff.
}
}
Can you see what can go wrong here? Since we did not atomically reenter the lock after the wait condition was checked, another thread could get in and invalidate the condition. In other words, another thread could do something that causes CheckSomeCondition to start returning false again before the following lock was reacquired. That can definitely cause a lot of weird problems if your second block of code requires that the condition be true.
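For contrast, here is a sketch of the Wait-based version, which re-checks the condition and reacquires the lock atomically, so the window described above does not exist (syncRoot is an ordinary object used as the lock, rather than the event):
private readonly object syncRoot = new object();

void FirstThread()
{
    lock (syncRoot)
    {
        // Do stuff that makes the condition true.
        Monitor.Pulse(syncRoot);
    }
}

void SecondThread()
{
    lock (syncRoot)
    {
        // Do stuff.
        while (!CheckSomeCondition())
        {
            // Releases the lock, sleeps until a Pulse, then
            // reacquires the lock before returning.
            Monitor.Wait(syncRoot);
        }
        // Do stuff - the condition is still true here, because we have
        // held the lock continuously since the last check.
    }
}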
I am implementing a very basic thread in C#:
private Thread listenThread;
public void startParser()
{
this.listenThread = new Thread(new ThreadStart(checkingData));
this.listenThread.IsBackground = true;
this.listenThread.Start();
}
private void checkingData()
{
while (true)
{
}
}
Then I immediately get 100% CPU. I want to check for sensor data inside the while(true) loop. Why is it like this?
Thanks in advance.
while (true) with an empty body is what's killing your CPU.
You can add Thread.Sleep(X) to your while loop to give the CPU some rest before checking again.
Also, it seems like you actually need a Timer.
Look at one of the Timer classes here http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx.
Use a Timer with as long a polling interval as you can afford - 1 second, half a second.
You need to trade off CPU usage against the maximum delay you can afford between checks.
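For example, a System.Threading.Timer polling once a second might look like this (the interval and callback are illustrative):
using System;
using System.Threading;

class SensorPoller
{
    private Timer timer;

    public void Start()
    {
        // Fire the callback immediately, then every 1000 ms,
        // on a thread-pool thread - no dedicated spinning thread.
        timer = new Timer(CheckSensor, null, 0, 1000);
    }

    public void Stop()
    {
        timer.Dispose();
    }

    private void CheckSensor(object state)
    {
        // read your sensor data here
    }
}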
Let your loop sleep. It's running around and around and getting tired. At the very least, let it take a break eventually.
Because your function isn't doing anything inside the while block, it grabs the CPU and, for all practical purposes, never lets go of it, so other threads can't get in to do their work:
private void checkingData()
{
while (true)
{
// executes, immediately
}
}
If you change it to the following, you should see more reasonable CPU consumption:
private void checkingData()
{
while (true)
{
// read your sensor data
Thread.Sleep(1000);
}
}
You can use a blocking queue. Taking an item from a blocking queue blocks the thread until an item is put into the queue; that doesn't cost any CPU.
With .NET 4, you can use BlockingCollection: http://msdn.microsoft.com/en-us/library/dd267312.aspx
Before version 4, there is no blocking queue in the .NET Framework.
You can find many implementations of a blocking queue if you google it.
Here is one implementation:
http://www.codeproject.com/KB/recipes/boundedblockingqueue.aspx
By the way, where does the data you are waiting for come from?
EDIT
If you are watching a file, you can use FileSystemWatcher instead of polling in a loop.
If your data comes from an external API and the API doesn't block the thread, there is no way to block the thread except to use Thread.Sleep.
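Going back to the blocking-queue suggestion: as a rough sketch, if whatever produces the sensor data can hand it to you (rather than you having to poll an API), the consuming side with .NET 4's BlockingCollection is just a blocking Take (the element type and names are illustrative):
// using System.Collections.Concurrent;
// Assumes the code that reads the sensor calls sensorData.Add(reading) when data arrives.
BlockingCollection<byte[]> sensorData = new BlockingCollection<byte[]>();

private void checkingData()
{
    while (true)
    {
        // Take blocks, using no CPU, until an item is available.
        byte[] reading = sensorData.Take();
        // process the reading here
    }
}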
If you're polling for a condition, definitely do as others suggested and put in a sleep. I'd also add that if you need maximum performance, you can use a simple heuristic to avoid sleeping while sensor data is still arriving: only when you detect that the sensor has been idle, say, 10 times in a row do you start to sleep on each iteration again.
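A sketch of that idea - only back off to sleeping once the sensor has been idle for several consecutive polls (the running flag, TryReadSensor, the threshold of 10 and the 50 ms sleep are all illustrative):
private volatile bool running = true;

private void PollLoop()
{
    int idleCount = 0;
    while (running)
    {
        if (TryReadSensor())      // hypothetical: returns true if some data was read
        {
            idleCount = 0;        // data keeps coming: keep polling flat out
        }
        else if (++idleCount >= 10)
        {
            Thread.Sleep(50);     // idle long enough: back off and stop burning CPU
        }
    }
}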
I have a class that implements the Begin/End Invocation pattern where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop but does take a bit of time to process, so the work itself is not easily stopped.
I now have the side effect where someone using my class is calling the Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, yet they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm targeting the 2.0 framework. I'm currently thinking that a Consumer/Producer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class every time the caller uses the Begin with callback I create a whole new thread on the thread pool. This call does a very small amount of processing and is not where I want to cancel. It's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool will create 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100% with not only the work but the creation of the work because of the way I queued the threads.
I was thinking that by using the Producer/Consumer pattern I could put these jobs into my own queue, which would allow me to moderate how many threads I create to avoid the CPU spike of creating all the concurrent threads. And I might be able to allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself, but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
EDIT: My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone who reads this question and is using 4.0.
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed wrote an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice is to dedicate a class instance to each thread, and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread's running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean-up resources as needed, and report that the operation was canceled as part of the End arguments.
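Since the question targets the 2.0 framework (no CancellationToken), the cancel flag can be as simple as a volatile bool that the worker checks between chunks of work. A sketch with illustrative names (WorkItem, GetWorkItems, Process, CleanUp and the Report* methods are placeholders for whatever the real class does):
class CancellableOperation
{
    private volatile bool cancelRequested;

    public void Cancel()
    {
        cancelRequested = true;
    }

    // Runs on the worker thread (e.g. from the Begin implementation).
    // WorkItem, GetWorkItems, Process, CleanUp and the Report* methods are
    // placeholders for the real members of the class.
    public void DoWork()
    {
        foreach (WorkItem item in GetWorkItems())
        {
            if (cancelRequested)
            {
                CleanUp();
                ReportCancelled(); // surface "was canceled" through the End arguments
                return;
            }
            Process(item);
        }
        ReportCompleted();
    }
}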
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
public class BackgroundWorkerReplaceable : IDisposable
{
BackgroundWorker activeWorker = null;
object activeWorkerSyncRoot = new object();
List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
DoWorkEventHandler doWork;
RunWorkerCompletedEventHandler runWorkerCompleted;
public bool IsBusy
{
get { return activeWorker != null && activeWorker.IsBusy; }
}
}
public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
{
this.doWork = doWork;
this.runWorkerCompleted = runWorkerCompleted;
ResetActiveWorker();
}
public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
{
try
{
lock(activeWorkerSyncRoot)
{
if(activeWorker.IsBusy)
{
ResetActiveWorker();
}
// This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
if(!activeWorker.IsBusy)
{
// Optionally handle ProgressChangedEventHandler and other features (under the lock!)
// Work on this new param
activeWorker.RunWorkerAsync(param);
}
else
{ // This should never happen since we create new workers when there's none available!
throw new LogicException(...); // assert or similar
}
}
}
catch(...) // InvalidOperationException and Exception
{ // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
}
}
public void Cancel()
{
ResetActiveWorker();
}
public void Dispose()
{ // You should implement a proper Dispose/Finalizer pattern
if(activeWorker != null)
{
activeWorker.CancelAsync();
}
foreach(BackgroundWorker worker in workerPool)
{
worker.CancelAsync();
worker.Dispose();
// perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
}
}
void ResetActiveWorker()
{
lock(activeWorkerSyncRoot)
{
if(activeWorker == null)
{
activeWorker = GetAvailableWorker();
}
else if(activeWorker.IsBusy)
{ // Current worker is busy - issue a cancel and set another active worker
activeWorker.CancelAsync(); // WorkerSupportsCancellation must be set to true [Link9372]
// Optionally handle ProgressEventHandler -=
activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
}
//else - do nothing, activeWorker is already ready for work!
}
}
BackgroundWorker GetAvailableWorker()
{
// Loop through workerPool and return a worker if IsBusy is false
// if the loop exits without returning...
if(activeWorker != null)
{
workerPool.Add(activeWorker); // Save the old worker for possible future use
}
return GenerateNewWorker();
}
BackgroundWorker GenerateNewWorker()
{
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerSupportsCancellation = true; // [Link9372]
//worker.WorkerReportsProgress
worker.DoWork += doWork;
worker.RunWorkerCompleted += runWorkerCompleted;
// Other stuff
return worker;
}
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost from using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit, so, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately canceled when the thread is - so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the canceled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being canceled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have a Cancel/Cancelled property which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.
I need to do a sort of "timeout" or pause in my method for 10 seconds (10000 milliseconds), but I'm not sure if the following would work, as I do not have multi-threading.
Thread.Sleep(10000);
I will try to use that current code, but I would appreciate if someone could explain the best and correct way of doing this, especially if the above code does not work properly. Thanks!
UPDATE: This program is actually a console application, and the function in question is making many HttpWebRequest calls to one server, so I wish to delay them for a specified number of milliseconds. Thus, no callback is required - all that is needed is an "unconditional pause" - basically the whole thing just stops for 10 seconds and then keeps going. I'm pleased that C# still considers this a thread, so Thread.Sleep(...) will work. Thanks everybody!
You may not have multi-threading, but you're still executing within a thread: all code executes in a thread.
Calling Thread.Sleep will indeed pause the current thread. Do you really want it to unconditionally pause for 10 seconds, or do you want to be able to be "woken up" by something else happening? If you're only actually using one thread, calling Sleep may well be the best way forward, but it will depend on the situation.
In particular, if you're writing a GUI app you don't want to use Thread.Sleep from the UI thread, as otherwise your whole app will become unresponsive for 10 seconds.
If you could give more information about your application, that would help us to advise you better.
Thread.Sleep is fine, and AFAIK the proper way. Even if you are not Multithreaded: There is always at least one Thread, and if you send that to sleep, it sleeps.
Another (bad) way is a spinlock, something like:
// Do never ever use this
private void DoNothing(){ }
private void KillCPU()
{
DateTime target = DateTime.Now.AddSeconds(10);
while(DateTime.Now < target) DoNothing();
DoStuffAfterWaiting10Seconds();
}
This is sadly still being used by people and while it will halt your program for 10 seconds, it will run at 100% CPU Utilization (Well, on Multi-Core systems it's one core).
That will indeed pause the executing thread/method for 10 seconds. Are you seeing a specific problem?
Note that you shouldn't Sleep the UI thread - it would be better to do a callback instead.
Note also that there are other ways of blocking a thread that allow simpler access to get it going again (if you find it is OK after 2s); such as Monitor.Wait(obj, 10000) (allowing another thread to Pulse if needed to wake it up):
static void Main() {
object lockObj = new object();
lock (lockObj) {
new Thread(GetInput).Start(lockObj);
Monitor.Wait(lockObj, 10000);
}
Console.WriteLine("Main exiting");
}
static void GetInput(object state) {
Console.WriteLine("press return...");
string s = Console.ReadLine();
lock (state) {
Monitor.Pulse(state);
}
Console.WriteLine("GetInput exiting");
}
You can do this with Thread.Interrupt too, but IMO that is messier.
You could use a separate thread to do it:
ThreadPool.QueueUserWorkItem(
delegate(object state)
{
Thread.Sleep(1000);
Console.WriteLine("done");
});
But, if this is a Windows Forms app, you will need to invoke the code after the delay from the Gui thread (this article, for example: How to update the GUI from another thread in C#?).
[Edit] Just saw your update. If it's a console app, then this will work. But if you haven't used multiple threads so far, then you need to be aware that this code will be executed in a different thread, which means you will have to take care about thread synchronization issues.
If you don't need background workers, stick to "keeping it simple".
Here is a pause class that will pause for the desired number of milliseconds and won't consume your CPU resources.
public class PauseClass
{
//(C) Michael Roberg
//Please feel free to distribute this class but include my credentials.
System.Timers.Timer pauseTimer = null;
public void BreakPause()
{
if (pauseTimer != null)
{
pauseTimer.Stop();
pauseTimer.Enabled = false;
}
}
public bool Pause(int milliseconds)
{
ThreadPriority CurrentPriority = Thread.CurrentThread.Priority;
if (milliseconds > 0)
{
Thread.CurrentThread.Priority = ThreadPriority.Lowest;
pauseTimer = new System.Timers.Timer();
pauseTimer.Elapsed += new ElapsedEventHandler(pauseTimer_Elapsed);
pauseTimer.Interval = milliseconds;
pauseTimer.Enabled = true;
while (pauseTimer.Enabled)
{
Thread.Sleep(10);
Application.DoEvents();
//pausThread.Sleep(1);
}
pauseTimer.Elapsed -= new ElapsedEventHandler(pauseTimer_Elapsed);
}
Thread.CurrentThread.Priority = CurrentPriority;
return true;
}
private void pauseTimer_Elapsed(object sender, ElapsedEventArgs e)
{
pauseTimer.Enabled = false;
}
}
Yes, that works just fine.
You don't have to have multiple threads to make use of some of the methods in the Thread class. You always have at least one thread.
For a timeout, you should have a static volatile bool isRunning class field. When the new thread starts, isRunning must become true, and at the end it must become false.
The main thread should have a method that loops on isRunning for the timeout you define. When the timeout ends, you should implement the logic. But never use Thread.Abort.
A pause... there isn't a straightforward solution. It depends on what you are doing inside the thread. However, you could look at Monitor.Wait.
If you can use an async method, you can do something like this to pause the function at a certain location. Once pause is set to false, it will continue executing the rest of the code in the method. Since this is an async method and the delay is awaited, UI execution won't be affected.
* Please note that async is supported only in .NET 4.5 and higher.
bool pause = true;
async void foo()
{
//some code
while (pause)
{
await Task.Delay(100);
}
//some code
}