Purpose of Thread.Sleep(1)? - c#

I was reading over some threading basics and on the msdn website I found this snippet of code.
// Put the main thread to sleep for 1 millisecond to
// allow the worker thread to do some work:
Thread.Sleep(1);
Here is a link to the page: http://msdn.microsoft.com/en-us/library/7a2f3ay4(v=vs.80).aspx.
Why does the main thread have to sleep for 1 millisecond? Will the secondary thread not start its work if the main thread is continuously running? Or is the example meant for a task that takes 1 millisecond to do? As in, if the task generally takes 5 seconds to complete, should the main thread sleep for 5,000 milliseconds?
If this is solely regarding CPU usage, here is a similar Question about Thread.Sleep.
Any comments would be appreciated.
Thanks.

The 1 in that code is not terribly special; it will always end up sleeping longer than that, as things aren't so precise, and giving up your time slice comes with no guarantee from the OS about when you will get it back.
The purpose of the time parameter in Thread.Sleep() is that your thread will yield for at least roughly that amount of time.
So that code is just explicitly giving up its time slot. Generally speaking, code like this should not be needed, as the OS will manage your threads for you, preemptively interrupting them to work on other threads.
This kind of code is often used in "threading examples", where the writer wants to force some artificial occurrence to prove some race condition or the like (that appears to be the case in your example).
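To see that the 1 is only a lower bound, you can time it yourself. Here is a minimal sketch (the numbers you get depend on the OS timer resolution and system load):

using System;
using System.Diagnostics;
using System.Threading;

class SleepTiming
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100; i++)
        {
            Thread.Sleep(1); // asks for ~1 ms, usually gets more
        }
        sw.Stop();
        // 100 requested milliseconds typically take noticeably longer,
        // often several hundred ms under the default timer resolution.
        Console.WriteLine("100 x Sleep(1) took " + sw.ElapsedMilliseconds + " ms");
    }
}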
As noted in Jon Hanna's answer to this same question, there is a subtle but important difference between Sleep(0) and Sleep(1) (or any other non-zero number), and as ChrisF alludes to, this can be important in some threading situations.
Both of those involve thread priorities; threads can be given higher or lower priorities, such that lower-priority threads will never execute as long as there are higher-priority threads that have any work to do. In such a case, Sleep(1) can be required... However...
Low-priority threads are also subject to what other processes are doing on the same system; so while your process might have no higher-priority threads running, if any others do, yours still won't run.
This isn't usually something you ever need to worry about, though; the default priority is the 'normal' priority, and under most circumstances, you should not change it. Raising or lowering it has numerous implications.

Thread.Sleep(0) will give up the rest of a thread's time-slice if a thread of equal priority is ready to schedule.
Thread.Sleep(1) (or any other value, but 1 is the lowest to have this effect) will give up the rest of the thread's time-slice unconditionally. If you want to make sure that even threads with lower priority get a chance to run (and such a thread could be doing something that is blocking this thread, in which case it has to run), then it's the one to go for.
http://www.bluebytesoftware.com/blog/PermaLink,guid,1c013d42-c983-4102-9233-ca54b8f3d1a1.aspx has more on this.
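As a rough illustration of that priority point (this is a made-up sketch; the effect only shows on a single core, or a fully busy machine, running an OS where Sleep(0) yields only to equal-priority threads, as on Windows XP):

using System;
using System.Threading;

class PriorityDemo
{
    static volatile bool workerRan = false;

    static void Main()
    {
        Thread worker = new Thread(() => { workerRan = true; });
        worker.Priority = ThreadPriority.Lowest;
        worker.Start();

        while (!workerRan)
        {
            // Thread.Sleep(0);  // yields only to equal-priority threads on older Windows,
            //                   // so the low-priority worker could be starved here
            Thread.Sleep(1);     // yields unconditionally, so the worker gets a turn
        }
        Console.WriteLine("worker ran");
        worker.Join();
    }
}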

If the main thread doesn't sleep at all then the other threads will not be able to run at all.
Inserting a Sleep of any length allows the other threads some processing time. Using a small value (of 1 millisecond in this case) means that the main thread doesn't appear to lock up. You can use Sleep(0), but as Jon Hanna points out that has a different meaning to Sleep(1) (or indeed any positive value) as it only allows threads of equal priority to run.
If the task takes 5 seconds then the main thread will sleep for a total of 5,000 milliseconds, but spread out over a longer period.

It's only for the sake of the example - they want to make sure that the worker thread has the chance to print "worker thread: working..." at least once before the main thread kills it.
As Andrew implied, this is important in the example especially because if you were running on a single-processor machine, the main thread may not give up the processor, killing the background thread before it has a chance to iterate even once.

Interesting thing I noticed today: interrupting a thread throws a ThreadInterruptedException. I was trying to catch the exception but could not for some reason. My coworker recommended that I put Thread.Sleep(1) inside the while loop, before the end of the try block, and that allowed me to catch the ThreadInterruptedException. (This makes sense, because Thread.Interrupt only raises the exception in the target thread once that thread blocks in a wait, sleep or join; the Sleep(1) gives it a point at which it can be interrupted.)
// Start the listener
tcpListener_ = new TcpListener(ipAddress[0], int.Parse(portNumber_));
tcpListener_.Start();
try
{
    // Wait for client connection
    while (true)
    {
        // Wait for the new connection from the client
        if (tcpListener_.Pending())
        {
            socket_ = tcpListener_.AcceptSocket();
            changeState(InstrumentState.Connected);
            readSocket();
        }
        Thread.Sleep(1);
    }
}
catch (ThreadInterruptedException) { }
catch (Exception ex)
{
    MessageBox.Show(ex.Message, "Contineo", MessageBoxButtons.OK, MessageBoxIcon.Error);
    Console.WriteLine(ex.StackTrace);
}
Some other class...
if (instrumentThread_ != null)
{
    instrumentThread_.Interrupt();
    instrumentThread_ = null;
}

Related

Force thread not to give back CPU until a part is finished

Consider two threads running simultaneously: A is reading and B is writing. While A is reading, its CPU time runs out in the middle of the code, and thread B then continues.
Is there any way to not give back the CPU until A finishes, while still letting B start or continue afterwards?
You need to understand that you have almost no control over when the CPU is given back and to whom it is given. The operating system does that. To have control over that, you'd need to be the operating system. The only things you can usually do are:
start a thread
set thread priority, so some threads are more likely to get time than others
put a thread to sleep immediately, and ask the operating system to wake it up upon some condition, maybe with some timeout (waiting time limit)
As a special case, or a typical use case, the last point is often also provided with a shorthand:
put a thread to sleep immediately, for a specified amount of time
By "sleep" I mean that this thread is paused and will not get any CPU time, even if all CPUs are idle, unless the thread is woken up by the OS due to some condition.
Furthermore, in a typical case there is no "thread A and thread B that switch CPU time between them"; there are lots of threads from various processes and the operating system itself, plus your two threads. This means that when your thread A loses the CPU, most probably it will not be thread B that gets the time next. Some other thread from somewhere else will get it, and at some future point in time maybe your thread A, or maybe thread B, will get it back.
This means that there is very little you can be sure of. You can be sure that your threads are
either dead
or sleeping
or proceeding 'forward' in a hard to determine order
If you need to ensure that some threads are synchronized, you must either not start them simultaneously, or put them to sleep at precise moments and wake them up in a precise order.
You've just said in comments:
You know, if in the middle of A CPU time finishes, data that has been retrieved is not complete
This means that you need to ensure that thread B does not try to touch the data before thread A finishes writing it. But also, if you think about it, you need to ensure that thread A doesn't start writing next data if the thread B is now reading previous data.
This means synchronization. This means that threads A and B must wait if the other thread is touching the data. This means that they need to be put to sleep and woken up when the other thread finishes.
In C#, the easiest way to do that is to use lock(x) keyword. When a thread enters a lock() section, it proceeds only if it is able to get the lock. If not, it is put to sleep. It can't get the lock if any other thread was faster and got it before. However, a thread releases the lock when it ends its job. Upon that time, one of the sleeping threads is woken up and given the lock.
lock(fooo) { // <- this line means 'acquire the lock or sleep'
    iam.doing(myjob);
    very.important(things);
    thatshouldnt.be.interrupted();
    byother(threads);
} // <- this line means 'release the lock'
So, when a thread gets through the lock(fooo) { line, you can't be sure it won't be interrupted. Oh, surely it will be. The OS will switch the threads back and forth to other processes, and so on. But you can be sure that no other threads of your app will be inside the code block. If they tried to get inside while your thread held that lock, they'd immediately fall asleep at the first lock line. One of them will later be woken up when your thread gets out of that code.
There's one more thing. The lock() keyword requires a parameter. I wrote fooo there. You need to pass something that will act as the lock. It can be any object, even a plain object:
private object thelock = new object();

private void dosomething()
{
    lock(thelock)
    {
        foobarize(thebaz);
    }
}
However, you must ensure that all threads use the same lock instance. Writing code like
private void dosomething()
{
    object thelock = new object();
    lock(thelock)
    {
        foobarize(thebaz);
    }
}
is nonsense, since every thread executing those lines will try locking on its own new object instance, will see it as "free" (it's new, just created, no one took it earlier), and will immediately get into the protected code block.
Now, you wrote about using ConcurrentQueue. This class provides safe mechanisms against concurrent access: you can be sure that adding, reading or removing items from that queue is already safe, and you don't need to add any synchronization of your own for that. If you still observe ill effects, then most probably you have put an item into the collection and then modified that item afterwards. A concurrent collection will not guard you against that. It can only make sure that Add/Remove/etc. are safe; it has no knowledge of, or control over, what you do to the items:
In short, if some thread B tries to read items from the collection, then in thread A this is NOT safe:
concurrentcoll.Add(item);
item.x = 5;
item.foobarize();
but this is safe:
item.x = 5;
item.foobarize();
concurrentcoll.Add(item);
// and do not touch the Item anymore here.

Delay and wake up a task from being executed? (Task.Delay?)

I have a situation where I have multiple threads being executed at once. In some cases these threads will be put in a while() loop for an unknown amount of time, and if a certain number of threads get caught in this loop then eventually the scheduler stops letting other threads be executed.
I was wondering if there is some way I could delay a thread from being executed (remove it from the scheduled list) and then let other threads in. Is it then possible to wake up that thread later by a threadID or something like that?
I am reading about Task.Delay and see that it suspends execution for a timespan and that it is possible to delay for an infinite amount of time, but is it possible to delay indefinitely UNTIL an event occurs and then un-delay it by some name or ID?
Edit: I thought this question was one that was harder to post code for, but more or less I have a situation where requests come in and are put into a loop like:
while(true){
    //check for something that could make me want to delete this thread/request
    //do some things
}
I noticed that when I sent a large number of requests that I never stopped, they ended up stuck in this loop (which I understand), but it seems the maximum number of threads that can be doing this is 16/32 (depending on the computer I run it on), and it stops other requests from being scheduled to run.
I wanted to know if inside the while() loop I could do something like this:
while(true){
    //put this thread to sleep
    //do some things that
    //call some function to wake up the specific thread I need to do work on, before I put it back to sleep
}
The difference now is that instead of 16/32 threads running, I can have one "king thread" that enters this while() loop, can 'do some things', and then wakes up the thread that needs to be affected by those 'things'. Is there a way to sleep and wake up a specific thread so that other threads can be scheduled to run?
From the question I guess that you are running a busy waiting loop. That's pretty bad as you found out.
Make the loop wait for an event:
while (true) {
    WaitForEvent();
    DoWork();
}
This requires cooperation from the thread (or component) that makes the event happen. You could use a ManualResetEvent or a TaskCompletionSource to make this coordination happen.
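For instance, here is a rough sketch using ManualResetEventSlim (the names are made up for illustration; a real version would typically pair the event with a queue of pending requests):

using System;
using System.Threading;

class EventDrivenWorker
{
    // set by whoever produces work; reset once the work has been picked up
    private readonly ManualResetEventSlim workReady = new ManualResetEventSlim(false);

    public void WorkLoop()
    {
        while (true)
        {
            workReady.Wait();   // blocks without burning CPU until Set() is called
            workReady.Reset();
            DoWork();
        }
    }

    // called from the producing thread when there is something to do
    public void SignalWork()
    {
        workReady.Set();
    }

    private void DoWork() { /* process the request */ }
}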
I can't really be more specific because the question is not particularly concrete about the scenario. I hope this pushes you in the right direction.

Is it Really Busy Waiting If I Thread.Sleep()?

My question is a bit nit-picky on definitions:
Can the code below be described as "busy waiting"? Despite the fact that it uses Thread.Sleep() to allow for context switching?
while (true) {
    if (work_is_ready) {
        doWork();
    }
    Thread.Sleep(A_FEW_MILLISECONDS);
}
PS - The current definition of busy waiting on Wikipedia suggests that sleeping between polls like this is a "less wasteful" form of busy waiting.
Any polling loop, regardless of the time between polling operations, is a busy wait. Granted, sleeping a few milliseconds is a lot less "busy" than no sleep at all, but it still involves processing: thread context switches and some minimal condition checking.
A non-busy wait is a blocking call. The non-busy version of your example would involve waiting on a synchronization primitive such as an event or a condition variable. For example, this pseudocode:
// initialize an event to be set when work is ready
Event work_is_ready;
work_is_ready.Reset();

// in code that processes work items
while (true)
{
    work_is_ready.Wait(); // non-busy wait for work item
    do_work();
}
The difference here is that there is no periodic polling. The Wait call blocks and the thread is never scheduled until the event is set.
That's not busy waiting. Busy waiting, or spinning, involves the opposite: avoiding context switching.
If you want to allow other threads to run, if and only if other threads are ready to run, to avoid deadlock scenarios on single-core CPUs (e.g., the current thread needs work_is_ready to be set to true, but if this thread doesn't give up the processor and let others run, it will never be set to true), you can use Thread.Sleep(0).
A much better option would be to use SpinWait.SpinUntil
SpinWait.SpinUntil(() => work_is_ready);
doWork();
SpinWait emits a special rep; nop (repeat no-op) or pause instruction that lets the processor know you're busy waiting, and is optimized for HyperThreading CPUs.
Also, in single core CPUs, this will yield the processor immediately (because busy waiting is completely useless if there's only one core).
But spinning is only useful if you're absolutely sure you won't be waiting on a condition for longer than it would take the processor to switch the context out and back in again. I.e., no more than a few microseconds.
If you want to poll for a condition every few milliseconds, then you should use a blocking synchronization primitive instead, as the wiki page suggests. For your scenario, I'd recommend an AutoResetEvent, which blocks the thread upon calling WaitOne until the event has been signaled (i.e., the condition has become true).
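For example, here is a minimal sketch of the AutoResetEvent approach (the work_is_ready flag from the question is replaced by an event; this is an illustration, not the original code):

using System;
using System.Threading;

class AutoResetEventDemo
{
    // signalled by the producer each time a work item is ready
    private static readonly AutoResetEvent workReady = new AutoResetEvent(false);

    static void Main()
    {
        Thread consumer = new Thread(() =>
        {
            while (true)
            {
                workReady.WaitOne();   // blocks until Set(); auto-resets for the next wait
                Console.WriteLine("doing work");
            }
        });
        consumer.IsBackground = true;
        consumer.Start();

        for (int i = 0; i < 3; i++)
        {
            Thread.Sleep(500);         // simulate work being produced
            workReady.Set();           // wakes the consumer exactly once
        }
        Thread.Sleep(500);             // give the consumer time to handle the last item
    }
}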
Read also: Overview of Synchronization Primitives
It depends on the operating system and the exact number of milliseconds you are sleeping. If the sleep is sufficiently long that the operating system can switch to another task, populate its caches, and usefully run that task until your task is ready-to-run again, then it's not busy waiting. If not, then it is.
To criticize this code, I would say something like this: "This code may busy wait if the sleep is too small to allow the core to do useful work between checks. It should be changed so that whatever creates the work triggers this code to respond."
This poor design creates a needless problem -- how long should the sleep be? If it's too short, you busy wait. If it's too long, the work sits undone. Even if it's long enough that you don't busy wait, you force needless context switches.
When your code is sleeping for a moment, technically it is in the sleep state, freeing up the CPU. With busy waiting, your code holds the CPU until the condition is met.
Can the code below be described as "busy waiting"? Despite the fact that it uses Thread.Sleep() to allow for context switching?
It is not busy waiting, but rather polling, which is more performant than busy waiting. There is a difference between the two:
Simply put, busy waiting is blocking, whereas polling is non-blocking.
Busy waiting is something like this:
for(;;) {
    if (condition) {
        break;
    }
}
The condition could be "checking the current time" (for example, performance counter polling). With this you can get a very accurate pause in your thread, which is useful, for example, for low-level I/O (toggling GPIOs etc.). Because of this, your thread is running all the time, and if you are on cooperative multithreading, you are fully in control of how long the thread stays in that wait. Usually this kind of thread has a high priority set and is uninterruptible.
Non-busy waiting means the thread is not busy. It allows other threads to execute, so there is a context switch. To allow a context switch, in most languages and OSes you can simply use sleep(). There are other similar functions, like yield(), wait(), select(), etc. Whether they are implemented as busy or non-busy depends on the OS and language, but in my experience a sleep > 0 was always non-busy.
The advantage of non-busy waiting is that it allows other threads to run, including idle threads. With this your CPU can go into power-saving mode, clock down, etc. It can also run other tasks. After the specified time the scheduler tries to go back to your thread, but it is just a try: it is not exact, and the wait may be a little longer than your sleep defines.
I think this is clear now.
And now the big question: Is this busy, or non-busy waiting:
for(;;) {
    if (condition) {
        break;
    }
    sleep(1);
}
The answer is: it is non-busy waiting. The sleep(1) allows the thread to perform a context switch.
Now the next question: is the second for() loop here busy or non-busy waiting?
function wait() {
    for(;;) {
        if (condition) {
            break;
        }
    }
}

for(;;) {
    wait();
    if (condition) {
        break;
    }
    sleep(1);
}
It is hard to say. It depends on the real execution time of the wait() function. If it does almost nothing, then the CPU spends almost the entire time in sleep(1), and this would be a non-blocking for-loop. But if wait() is a heavy calculation that never allows a thread context switch, then this whole for-loop may become a blocking function, even though there is a sleep(1). Think of the worst case: the wait() function never returns to the caller, because the condition isn't hit for a long time.
This is hard to answer, because we don't know the conditions. You can picture the problem, where you cannot answer the question because you don't know the conditions, in the following way:
if (unknownCondition) {
    for(;;) {
        if (condition) {
            break;
        }
    }
} else {
    for(;;) {
        if (condition) {
            break;
        }
        sleep(1);
    }
}
As you see, it's the same: because you don't know the conditions, you cannot say whether the wait is busy or non-busy.

Is Thread.Sleep(1) special?

Joe Duffy (author of Concurrent Programming on Windows) writes in this blog article that Thread.Sleep(1) is preferred over Thread.Sleep(0) because it will suspend for same and lower priority threads, not just equal priority threads as for Thread.Sleep(0).
The MSDN documentation for .NET says that Thread.Sleep(0) is special: it will suspend this thread and allow other waiting threads to execute. But it says nothing about Thread.Sleep(1) (for any .NET version).
So, is Thread.Sleep(1) actually doing anything special?
Background:
I'm refreshing my knowledge of concurrent programming. I wrote some C# code to visibly show that pre/post increments and decrements are non-atomic and therefore not thread-safe.
To avoid needing to create hundreds of threads I place a Thread.Sleep(0) after incrementing a shared variable to force the scheduler to run another thread. This regular swapping of threads makes the non-atomic nature of pre/post increment/decrement more obvious.
Thread.Sleep(0) appears to cause no additional delay, as expected. However, if I change this to Thread.Sleep(1), it appears to revert to normal sleep behaviour (e.g. I get roughly a minimum of 1 ms delay).
This would mean that while Thread.Sleep(1) may be preferred, any code that uses it in a loop would run much slower.
This SO question "Could someone explain this interesting behaviour with Sleep(1)?" is sort of relevant, but it is C++ focused and just repeats the guidance in Joe Duffy's blog article.
Here's my code for anyone interested (copied from LinqPad, so you may need to add a class around it):
int x = 0;

void Main()
{
    List<Thread> threadList = new List<Thread>();
    Stopwatch sw = new Stopwatch();
    for (int i = 0; i < 20; i++)
    {
        threadList.Add(new Thread(Go));
        threadList[i].Priority = ThreadPriority.Lowest;
    }
    sw.Start();
    foreach (Thread thread in threadList)
    {
        thread.Start();
    }
    foreach (Thread thread in threadList)
    {
        thread.Join();
    }
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
    Thread.Sleep(200);
    Console.WriteLine(x);
}

void Go()
{
    for (int i = 0; i < 10000; i++)
    {
        x++;
        Thread.Sleep(0);
    }
}
You no longer need to use Sleep(1) instead of Sleep(0) because Microsoft changed the implementation of the Windows API Sleep().
From the MSDN documentation for Sleep(), this is what happens now with Sleep(0):
A value of zero causes the thread to relinquish the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution.
This is what used to happen in Windows XP:
A value of zero causes the thread to relinquish the remainder of its time slice to any other thread of equal priority that is ready to run. If there are no other threads of equal priority ready to run, the function returns immediately, and the thread continues execution. This behavior changed starting with Windows Server 2003.
Note the difference between "any other thread" and "any other thread of equal priority".
The only reason that Joe Duffy suggests using Sleep(1) rather than Sleep(0) is because it is the shortest Sleep() value that will prevent the Sleep() from returning immediately if there are no other threads of equal priority ready to run, when running on Windows XP.
You don't need to worry about this for OS versions after Windows Server 2003 because of the change in behaviour of Sleep().
I draw your attention to this part of Joe's blog:
And even though there's an explicit Sleep in there, issuing it doesn't allow the producer to be scheduled because it's at a lower priority.
In XP, lower priority threads would be starved even if the main thread (of higher priority) did Sleep(0). Post-XP, this will no longer happen because Sleep(0) will allow the lower priority threads to run.

Possible Race condition with ManualResetEvent

Problem:
I am trying to start 6 threads from the ThreadPool to work on individual tasks. Each task's ManualResetEvent is stored in an array of ManualResetEvents; the thread number corresponds to the index in that array.
Once I have initiated these 6 threads, I move on and wait for them to complete. The waiting is done in the main thread.
Now, sometimes my waiting logic doesn't return even after a long time (2 days, as I have seen). Here is the code sample for the thread wait logic:
foreach (ManualResetEvent whandle in eventList)
{
    try
    {
        whandle.WaitOne();
    }
    catch (Exception) { }
}
As per the documentation of WaitOne(), it is a synchronous call that does not return until the event has been set by the worker thread.
Sometimes my threads have only a small amount of work and may even return before I reach the wait logic. Is it possible that WaitOne() will keep waiting for the Set() even though it already happened in the past?
Is this a correct logic to wait for the all the threads to close?
I'm not directly answering this question. Here is what you should do:
Start tasks using Task.Factory.StartNew and use Task.WaitAll(Task[]) to wait for them. You do not have to deal with events that way. Exceptions will nicely propagate to the "forking" thread. You don't need the old ThreadPool API anymore.
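For example, a minimal sketch of that approach (the work method here is a placeholder for whatever your six tasks actually do):

using System;
using System.Threading.Tasks;

class TaskWaitAllDemo
{
    static void Main()
    {
        // start the six work items on the thread pool
        Task[] tasks = new Task[6];
        for (int i = 0; i < tasks.Length; i++)
        {
            int taskNumber = i;
            tasks[i] = Task.Factory.StartNew(() => DoWork(taskNumber));
        }

        // blocks until every task has finished; exceptions thrown by the
        // tasks are rethrown here wrapped in an AggregateException
        Task.WaitAll(tasks);
    }

    static void DoWork(int n)
    {
        Console.WriteLine("task " + n + " working");
    }
}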
Hope this helps.
(Note: I think your best bet is Parallel.Invoke() - see later in this answer.)
What you are doing will normally work fine, so the problem is likely to be that one of your threads is blocking for some reason.
You should be able to debug this readily enough - you can attach the debugger and break into the program and then look at the call stack to see which thread(s) are blocked. Be prepared for some head-scratching if you discover a race condition though!
Another thing to be aware of is that you can't do the following:
myEvent.Set();
myEvent.Reset();
with nothing (or very little) between the .Set() and the .Reset(). If you do that when several threads are waiting on myEvent, some of them will miss the event being set! (This effect is not well documented on MSDN.)
By the way, you shouldn't ignore exceptions - always log them in some way, at the very least.
(This section doesn't answer the question, but it may provide some helpful information)
I also want to mention an alternative way to wait for the threads. Since you have a set of ManualResetEvents, you can copy them to a plain array and pass it to WaitHandle.WaitAll().
Your code could look a little like this:
WaitHandle.WaitAll(eventList.ToArray());
Another approach to waiting for all threads to finish is to use a CountdownEvent. It becomes signalled when a countdown reaches zero; you start the count at the number of threads, and each thread signals it when it exits. There's an example here.
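A minimal sketch of the CountdownEvent approach (the thread bodies are placeholders):

using System;
using System.Threading;

class CountdownDemo
{
    static void Main()
    {
        const int threadCount = 6;
        using (CountdownEvent countdown = new CountdownEvent(threadCount))
        {
            for (int i = 0; i < threadCount; i++)
            {
                int n = i;
                new Thread(() =>
                {
                    Console.WriteLine("thread " + n + " working");
                    countdown.Signal();   // decrement the count as each thread finishes
                }).Start();
            }

            countdown.Wait();             // blocks until the count reaches zero
        }
        Console.WriteLine("all threads finished");
    }
}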
Parallel.Invoke()
If your threads do not return values, and all you want to do is launch them and then have the launching thread wait for them to exit, then I think Parallel.Invoke() will be the best way of all. It avoids you having to handle the synchronization yourself.
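A minimal sketch (the method names are placeholders for your own work items):

using System;
using System.Threading.Tasks;

class ParallelInvokeDemo
{
    static void Main()
    {
        // runs the delegates (potentially in parallel) and returns
        // only when all of them have completed
        Parallel.Invoke(
            () => DoTask("one"),
            () => DoTask("two"),
            () => DoTask("three"));
    }

    static void DoTask(string name)
    {
        Console.WriteLine("task " + name + " done");
    }
}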
(Otherwise, as svick says in the comments above, use Task rather than the old thread classes.)
