I have an example of CountdownEvent usage, but when I go through the sample code I just don't understand what it is doing, or how CountdownEvent's Signal() and AddCount() help synchronize multiple threads.
Here is the sample. Please, someone help me understand how synchronization works for multiple threads in this example where Signal() and AddCount() are used.
class Program
{
    static void Main(string[] args)
    {
        using (CountdownEvent e = new CountdownEvent(1))
        {
            // fork work:
            for (int i = 1; i <= 5; i++)
            {
                // Dynamically increment signal count.
                TaskInfo ti = new TaskInfo("Current Thread ", i);
                Console.WriteLine("Running thread " + e.CurrentCount);
                e.AddCount();
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    try
                    {
                        //ProcessData(state);
                        TaskInfo inner_ti = (TaskInfo)state;
                        //Console.WriteLine(inner_ti.Boilerplate + inner_ti.Value);
                        Thread.Sleep(2000);
                    }
                    finally
                    {
                        Console.WriteLine("Signal thread " + e.CurrentCount);
                        e.Signal();
                    }
                },
                ti);
            }

            Console.WriteLine("Outer Signal thread " + e.CurrentCount);
            e.Signal();

            // The first element could be run on this thread.
            // Join with work.
            Console.WriteLine("Wait thread ");
            e.Wait();
            Console.WriteLine("ReadLine..... ");
            Console.ReadLine();
        }
    }
}

public class TaskInfo
{
    // State information for the task. These members
    // can be implemented as read-only properties, read/write
    // properties with validation, and so on, as required.
    public string Boilerplate;
    public int Value;

    // Public constructor provides an easy way to supply all
    // the information needed for the task.
    public TaskInfo(string text, int number)
    {
        Boilerplate = text;
        Value = number;
    }
}
Please just guide me with a small code sample showing how Signal() and AddCount() are used in a real-life scenario for thread synchronization. Thanks.
The job of CountdownEvent is to provide a waitable object (i.e. an object that will block the current thread on request until some condition is satisfied), where the condition that needs to be satisfied is for the object's internal counter to reach the value of 0.
The code you've shown will initialize the CountdownEvent object with a count of 1. This value represents the main thread itself; the main thread will call Signal() later, indicating that it's completed its own work (which is to start five other threads).
For each new task that is created, the main thread increments — by calling the AddCount() method — the CountdownEvent object's counter by one before starting that new task (in this case, by queuing the task to the global thread pool). Each task, as it completes, will then decrement the object's counter by calling the Signal() method.
So initially, the code is repeatedly increasing the counter, from its initial value of 1, to its maximum value of 6.
Immediately after the tasks have been queued, the main thread decrements the counter to 5. Each task, as it completes, decrements the counter again. Five tasks means decrementing the counter five times, so when the last task completes, the counter will reach 0.
Remember: the point of the CountdownEvent object is to release a thread that is waiting on it when its internal counter reaches 0. (Or more specifically, for the waitable object to be set to its signaled, non-blocking state).
The main thread calls the Wait() method on the CountdownEvent object, which initially causes the main thread to be blocked (i.e. to wait). It will continue to wait until the CountdownEvent is set to the non-blocking state, which occurs when its internal counter reaches 0, which occurs when the last task completes.
Thus, the main thread waits until the last task completes.
just guide me with a small code sample showing how Signal() and AddCount() are used in a real-life scenario for thread synchronization
The code example you've posted here seems "real" enough. My description above explains how the code example works. You would use the CountdownEvent object in any scenario where you have some specific count of operations, tasks, events, etc. that should occur before some specific thread should continue after waiting.
Of course, not all synchronization scenarios involve this kind of requirement. There are other, different synchronization mechanisms that can be used for those other scenarios. The CountdownEvent is specifically for those scenarios where the waiting of a thread is unblocked by the completion of a countdown, hence the name of the class.
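To make the pattern concrete, here is a minimal, self-contained sketch of the same fork/join idea described above, stripped of the TaskInfo plumbing (the task count and messages are invented for illustration):

```csharp
using System;
using System.Threading;

class CountdownDemo
{
    static void Main()
    {
        // The initial count of 1 represents the main thread's own "work"
        // (queuing the tasks). It prevents the counter from reaching 0
        // prematurely if an early task finishes before the rest are queued.
        using (var countdown = new CountdownEvent(1))
        {
            for (int i = 0; i < 3; i++)
            {
                countdown.AddCount();   // one increment per queued task
                int taskId = i;         // capture loop variable for the closure
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        Console.WriteLine("Task {0} working", taskId);
                    }
                    finally
                    {
                        countdown.Signal();  // decrement when this task is done
                    }
                });
            }

            countdown.Signal();  // the main thread's own decrement: queuing is done
            countdown.Wait();    // blocks until the counter reaches 0
            Console.WriteLine("All tasks finished");
        }
    }
}
```

The counter goes 1 → 4 (three AddCount calls) → 3 (the main thread's Signal) → 0 (one Signal per completed task), at which point Wait() returns.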
Related
I'm reading a book on threads, below is the quote from the book:
When a thread calls the Wait method, the system checks if the Task that the
thread is waiting for has started executing. If it has, then the thread calling Wait will block
until the Task has completed running. But if the Task has not started executing yet, then
the system may (depending on the TaskScheduler) execute the Task by using the thread
that called Wait. If this happens, then the thread calling Wait does not block; it executes
the Task and returns immediately
Let's say we have the following code:
static void Main(string[] args)
{
    // Create a Task (it does not start running now)
    Task<Int32> t = new Task<Int32>(n => Sum((Int32)n), 10);
    t.Start();
    t.Wait();
    Console.WriteLine("The Sum is: " + t.Result);
}

static Int32 Sum(Int32 n)
{
    Int32 sum = 0;
    for (; n > 0; n--)
    {
        sum += n;
    }
    return sum;
}
My understanding is that t.Start() means the CLR will schedule the task for execution so that a worker thread from the thread pool can execute it later. So let's say the main thread has reached t.Wait() while the task has still not been executed by any worker thread (the task has been scheduled but hasn't been picked up by a thread-pool worker thread).
In that case the main thread will actually execute the Sum method itself. Is my understanding correct? So if I run the program a thousand times, 99.99% of the time the Sum method will be executed by a worker thread, but 0.01% of the time it will be executed by the main thread. Currently, no matter how many times I run the program, it is always a worker thread that executes the Sum method, but that's only because the task always starts executing before the main thread reaches t.Wait(). If something unusual happens, there is still a small chance (0.01%) that the main thread will execute the Sum method. Is my understanding correct?
if [the unusual happens], there is still a chance (0.01%) that the main thread can execute Sum method, is my understanding correct?
The answer is possibly!
This involves some serious deep-diving into the specific implementation details of how the TaskScheduler handles queued tasks, and how the compiler and runtime implement the state machines that drive Tasks and async operations behind the scenes.
But in the basic sense your conclusion, in my opinion, is correct! Per MSDN, if you are using the default TaskScheduler, it implements certain optimizations on your behalf, such as task inlining (as you described) and other optimizations like work stealing.
If work stealing and task inlining are not desirable, consider specifying a synchronization context of the tasks that you are creating, to prevent them from performing work on threads other than the ones you specify.
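As a sketch of how you could observe this yourself (not a guaranteed reproduction, since whether inlining happens depends entirely on scheduler timing), you can compare thread IDs to see which thread actually ran the task body:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InliningDemo
{
    static void Main()
    {
        int mainThreadId = Thread.CurrentThread.ManagedThreadId;
        int taskThreadId = 0;

        // Cold task: created but not started, as in the book's example.
        var t = new Task(() => taskThreadId = Thread.CurrentThread.ManagedThreadId);
        t.Start();
        t.Wait();  // may inline the task onto this thread if it hasn't started yet

        // On most runs a pool thread wins the race and the IDs differ;
        // if Wait() inlined the task, the IDs are equal.
        Console.WriteLine(taskThreadId == mainThreadId
            ? "Task was inlined onto the main thread"
            : "Task ran on a thread-pool thread");
    }
}
```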
I wrote this little program:
class Program
{
    static void Main(string[] args)
    {
        Thread t = new Thread(WriteX);
        t.Start();
        for (int i = 0; i < 1000; i++)
        {
            Console.Write("O");
        }
    }

    private static void WriteX()
    {
        for (int i = 0; i < 1000; i++)
        {
            Console.Write(".");
        }
    }
}
I ran it about fifty times, and the first character on the console was always "O". This is weird to me, because the t thread is started first and only then does the main thread continue.
Is there any explanation for this?
This is probably because Thread.Start first causes a state change of the thread it is called on, and the OS then schedules it for execution, whereas the main thread is already running and does not need these two steps. That is probably why the statement in the main thread executes before the one in the newly created thread. Keep in mind that the order of thread execution is not guaranteed.
Thread.Start Method
1) The Thread.Start method causes the operating system to change the state of the current instance to ThreadState.Running.
2) Once a thread is in the ThreadState.Running state, the operating system can schedule it for execution. The thread begins executing at the first line of the method represented by the ThreadStart delegate.
Edit: It seems to me that representing this in graphical form would make it clearer; the original answer included a diagram of the thread-execution sequence here (not reproduced).
You say:
"It is weird for me, because the t thread starts first then the main continues.".
This is not true. The "main" thread is already running. When t.Start(); is executed, the OS is told that t is in the running state. The OS will then schedule execution time for the thread "soon". That is not the same as instructing the OS to stop executing the current thread until thread t has started. In other words, when Start returns, there is no guarantee that the thread has already started executing.
This is more advice than an answer:
(Please note that I see no real-life use for what you are trying to achieve, so I treat your problem as a thought experiment / proof of concept not explained in detail.)
If you want your threads to "race" for control, don't give your main thread a head start! Creating a thread has some overhead, and your main thread is already created (since it creates your other thread). If you want a roughly equal chance for both your main and worker threads, you should wait in the main thread for the worker thread to be created, and wait in the background thread for the main thread to start the race. This can be achieved with sync objects.
In practice it would look like this:
You should declare two ManualResetEvents that are visible to both your main and background threads, like this:
private static ManualResetEvent backgroundThreadReady = new ManualResetEvent(false);
private static ManualResetEvent startThreadRace = new ManualResetEvent(false);
Then in your main thread, wait for the worker thread to be initialized:
static void Main(string[] args)
{
    Thread t = new Thread(WriteX);
    t.Start();

    backgroundThreadReady.WaitOne(); // wait for the background thread to be ready
    startThreadRace.Set();           // signal the background thread to start the race

    for (int i = 0; i < 1000; i++)
    {
        Console.Write("O");
    }
}
And in your thread:
private static void WriteX()
{
    backgroundThreadReady.Set(); // inform the main thread that this thread is ready for the race
    startThreadRace.WaitOne();   // wait until the main thread starts the race

    for (int i = 0; i < 1000; i++)
    {
        Console.Write(".");
    }
}
Please note that I could have used other waitable sync objects (a mutex, an AutoResetEvent, even a critical-section lock with some hackery); I just chose the simplest, fastest solution that can easily be extended.
Your code is non-deterministic. It contains no threading primitives that would prioritize one thread over another, or make one thread wait for another.
The main thread continues with its next instructions immediately after starting the new thread, and it takes time for the new thread's method to actually begin running.
It basically takes time to start the thread up. The thread's code runs concurrently with the rest of the first method, so once you account for the time needed to start the thread and reach the point where it writes the ".", does the output make sense?
If you have a sort of reset button in your app to start everything again (without exiting) you may find that the first character is the "." because the thread will already exist.
There is only one reason why the main thread finishes before the created thread, and that is because it takes time to start a thread. The only time you would use threads to speed up a program is when two tasks can run at the same time. If you want the second loop to finish first, take a look at Parallel.For loops in C#; these run the loop's iterations concurrently (not all of them at once, but as many as your PC can handle).
I simplified the example below for the sake of clarity, but I came across this in a live production program and I cannot see how it would be working!
public class Test
{
    static void Main()
    {
        Counter foo = new Counter();
        ThreadStart job = new ThreadStart(foo.Count);
        Thread thread = new Thread(job);
        thread.Start();
        Console.WriteLine("Main terminated");
    }
}

public class Counter
{
    public void Count()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine("Other thread: {0}", i);
            Thread.Sleep(500);
        }
        Console.WriteLine("Counter terminated");
    }
}
The main routine starts the counter thread and the main routine terminates. The counter thread carries on regardless giving the following output.
Main terminated
Other thread: 0
Other thread: 1
Other thread: 2
Other thread: 3
Other thread: 4
Other thread: 5
Other thread: 6
Other thread: 7
Other thread: 8
Other thread: 9
Counter terminated
My example program demonstrates that although the method that started the thread has returned, the thread survives to completion. However, my understanding is that once an object is out of scope, its resources will eventually be tidied up by garbage collection.
In my real-life scenario, the thread does a mass emailing lasting 1-2 hours. My question is: would garbage collection eventually kill off the thread, or does the GC know that the thread is still processing? Would my emailing thread always run to completion, or is there a danger it will terminate abnormally?
From System.Threading.Thread
It is not necessary to retain a reference to a Thread object once you have started the thread. The thread continues to execute until the thread procedure is complete.
So even if the Thread object is unreferenced, the thread will still run.
Have a look at the documentation for System.Threading.Thread.IsBackground
If a thread isn't a background thread, it will keep the application from shutting down until it's done.
However, my understanding is that once a class is out of scope, its resources will eventually be tidied up by garbage collection.
This can be stated more precisely:
Once an object instance is no longer accessible from any executable code through a managed reference, it becomes eligible for garbage collection.
When you create a new thread that is executing a particular object's method you're making that object's contents accessible throughout that thread's lifetime. The GC can only clean it up if it's able to prove that it is no longer possible for any of the application's threads to ever access that object again. Since your code can still access the object instance, it doesn't get GCed.
A variable's scope is a compile-time concept that determines where the variable is accessible. A thread is a running object controlled by the runtime, regardless of variable scope.
I need to control one thread for my own purposes: calculating, waiting, reporting, etc...
In all other cases I'm using the ThreadPool or TaskEx.
In the debugger, when I call Thread.Sleep(), I notice that some parts of the UI become less responsive, though without the debugger everything seems to work fine.
The question is: if I create a new Thread and Sleep() it, can that affect the ThreadPool or Tasks?
EDIT: here are code samples:
One random place in my app:
ThreadPool.QueueUserWorkItem((state) =>
{
LoadImageSource(imageUri, imageSourceRef);
});
Another random place in my app:
var parsedResult = await TaskEx.Run(() => JsonConvert.DeserializeObject<PocoProductItem>(resultString, Constants.JsonSerializerSettings));
My ConcurrentQueue (modified, original is taken from here):
Creation of thread for Queue needs:
public void Process(T request, bool Async = true, bool isRecurssive = false)
{
    if (processThread == null || !processThread.IsAlive)
    {
        processThread = new Thread(ProcessQueue);
        processThread.Name = "Process thread # " + Environment.TickCount;
        processThread.Start();
    }
If one of the tasks reports a networking problem, I want this thread to wait a bit:
if (ProcessRequest(requestToProcess, true))
{
    RequestQueue.Dequeue();
}
else
{
    DoWhenTaskReturnedFalse();
    Thread.Sleep(3000);
}
So, the question once more: can Thread.Sleep(3000), called from the new Thread(ProcessQueue), affect the ThreadPool or TaskEx.Run()?
Assuming the thread you put to sleep was obtained from the thread pool, then it certainly does affect the pool. If you explicitly make a pool thread sleep, it cannot be reused by the pool during that time. This may cause the thread pool to spawn new threads if there are jobs waiting to be scheduled, and creating a new thread is always expensive; threads are system resources.
You can, however, look at the Task.Delay method (along with async and await), which suspends executing code in a more intelligent way, allowing the thread to be reused while waiting.
Refer to this Thread.Sleep vs. Task.Delay article.
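As a hedged sketch of that suggestion, the queue's retry pause could be written with async/await so that no thread is pinned for the whole 3-second wait. ProcessWithBackoffAsync and TryProcess below are hypothetical stand-ins for the poster's queue logic, not their actual methods:

```csharp
using System;
using System.Threading.Tasks;

class DelayDemo
{
    // Hypothetical retry loop: instead of Thread.Sleep(3000), which pins
    // a thread for the entire pause, await Task.Delay(...) releases the
    // thread while the pause is pending.
    static async Task ProcessWithBackoffAsync()
    {
        for (int attempt = 1; attempt <= 3; attempt++)
        {
            if (TryProcess(attempt))             // assumed unit of work
                return;
            await Task.Delay(TimeSpan.FromMilliseconds(300)); // non-blocking pause
        }
    }

    // Stub standing in for ProcessRequest: "succeeds" on the third try.
    static bool TryProcess(int attempt) => attempt == 3;

    static void Main() => ProcessWithBackoffAsync().GetAwaiter().GetResult();
}
```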
Thread.Sleep() only affects the thread it is called from. If you call Thread.Sleep() on a ThreadPool thread while trying to queue up more work, you may be hitting the ThreadPool's maximum thread count, so queued items wait for a thread to finish before another can execute.
http://msdn.microsoft.com/en-us/library/system.threading.threadpool.setmaxthreads.aspx
No, Thread.Sleep() affects only the current thread. From the Thread.Sleep(Int32) documentation:
The number of milliseconds for which the thread is suspended.
I have a problem in a production service which contains a "watchdog" timer used to check whether the main processing job has become frozen (this is related to a COM interop problem which unfortunately can't be reproduced in test).
Here's how it currently works:
During processing, the main thread resets a ManualResetEvent, processes a single item (this shouldn't take long), then sets the event. It then continues to process any remaining items.
Every 5 minutes, the watchdog calls WaitOne(TimeSpan.FromMinutes(5)) on this event. If the result is false, the service is restarted.
Sometimes, during normal operation, the service is being restarted by this watchdog even though processing takes nowhere near 5 minutes.
The cause appears to be that when multiple items await processing, the time between the Set() after the first item is processed, and the Reset() before the second item is processed is too brief, and WaitOne() doesn't appear to recognise that the event has been set.
My understanding of WaitOne() is that the blocked thread is guaranteed to receive a signal when Set() is called, but I assume I'm missing something important.
Note that if I allow a context switch by calling Thread.Sleep(0) after calling Set(), WaitOne() never fails.
Included below is a sample which produces the same behaviour as my production code. WaitOne() sometimes waits for 5 seconds and fails, even though Set() is being called every 800 milliseconds.
private static ManualResetEvent _handle;

private static void Main(string[] args)
{
    _handle = new ManualResetEvent(true);
    ((Action) PeriodicWait).BeginInvoke(null, null);
    ((Action) PeriodicSignal).BeginInvoke(null, null);
    Console.ReadLine();
}

private static void PeriodicWait()
{
    Stopwatch stopwatch = new Stopwatch();
    while (true)
    {
        stopwatch.Restart();
        bool result = _handle.WaitOne(5000, false);
        stopwatch.Stop();
        Console.WriteLine("After WaitOne: {0}. Waited for {1}ms", result ? "success" : "failure",
            stopwatch.ElapsedMilliseconds);
        SpinWait.SpinUntil(() => false, 1000);
    }
}

private static void PeriodicSignal()
{
    while (true)
    {
        _handle.Reset();
        Console.WriteLine("After Reset");
        SpinWait.SpinUntil(() => false, 800);
        _handle.Set();
        // Uncommenting either of the lines below prevents the problem
        //Console.WriteLine("After Set");
        //Thread.Sleep(0);
    }
}
The Question
While I understand that calling Set() closely followed by Reset() doesn't guarantee that all blocked threads will resume, is it also not guaranteed that any waiting threads will be released?
No, this is fundamentally broken code. There are only reasonable odds that WaitOne() will complete when you keep the MRE set for such a short amount of time. Windows favors releasing a thread that's blocked on an event, but this fails badly when the thread isn't actually waiting at that moment, or when the scheduler picks another thread instead: one that runs at a higher priority and also got unblocked (a kernel thread, for example). An MRE keeps no "memory" of having been signaled and not yet waited on.
Neither Sleep(0) nor Sleep(1) is good enough to guarantee that the wait will complete; there's no reasonable upper bound on how often the waiting thread can be bypassed by the scheduler. Although you probably ought to shut down the program when that takes longer than 10 seconds ;)
You'll need to do this differently. A simple way is to rely on the worker to eventually set the event. So reset it before you start waiting:
private static void PeriodicWait() {
    Stopwatch stopwatch = new Stopwatch();
    while (true) {
        stopwatch.Restart();
        _handle.Reset();
        bool result = _handle.WaitOne(5000);
        stopwatch.Stop();
        Console.WriteLine("After WaitOne: {0}. Waited for {1}ms", result ? "success" : "failure",
            stopwatch.ElapsedMilliseconds);
    }
}

private static void PeriodicSignal() {
    while (true) {
        _handle.Set();
        Thread.Sleep(800); // Simulate work
    }
}
You can't "pulse" an OS event like this.
Among other issues, there's the fact that any OS thread performing a blocking wait on an OS handle can be temporarily interrupted by a kernel-mode APC; when the APC finishes, the thread resumes waiting. If the pulse happened during that interruption, the thread doesn't see it. This is just one example of how "pulses" can be missed (described in detail in Concurrent Programming on Windows, page 231).
BTW, this does mean that the PulseEvent Win32 API is completely broken.
In a .NET environment with managed threads, there's even more possibility of missing a pulse. Garbage collection, etc.
In your case, I would consider switching to an AutoResetEvent which is repeatedly Set by the working process and (automatically) reset by the watchdog process each time its Wait completes. And you'd probably want to "tame" the watchdog by only having it check every minute or so.
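A minimal sketch of that AutoResetEvent suggestion (names and timings are invented for illustration): the worker Sets the event as a heartbeat, and each successful WaitOne consumes (auto-resets) it, so there is no explicit Reset racing with Set and a set-but-unconsumed signal is never lost.

```csharp
using System;
using System.Threading;

class WatchdogDemo
{
    static readonly AutoResetEvent Heartbeat = new AutoResetEvent(false);

    static void Worker()
    {
        for (int i = 0; i < 5; i++)
        {
            Thread.Sleep(200);   // simulate processing one item
            Heartbeat.Set();     // latch a heartbeat; stays set until consumed
        }
    }

    static void Main()
    {
        new Thread(Worker) { IsBackground = true }.Start();

        for (int i = 0; i < 5; i++)
        {
            // Unlike the Set/Reset pulse, a latched signal is remembered
            // until a WaitOne consumes it, so brief signals can't be missed.
            bool alive = Heartbeat.WaitOne(TimeSpan.FromSeconds(5));
            Console.WriteLine(alive ? "heartbeat received" : "frozen - restart service");
        }
    }
}
```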