Why does thread priority have no effect? - c#

namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread t = new Thread(() => WriteY("11"));
            t.Start();
            t.IsBackground = true;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;
            for (int i = 0; i < 1000; i++) Console.Write("x");
            System.Console.ReadLine();
        }

        static void WriteY(string ss)
        {
            for (int i = 0; i < 1000; i++)
                Console.Write(ss);
            System.Console.ReadLine();
        }
    }
}
Hello, I think the "x" thread should complete first, since its priority is highest, but the result is that the two outputs are still interleaved.

First of all, regardless of priorities, you cannot make any assumptions about which parallel job finishes first.
Also, please read this article by Jeff Atwood about why using thread priorities is a bad idea:
http://www.codinghorror.com/blog/2006/08/thread-priorities-are-evil.html
In case of tl;dr, just one quote: No matter how brilliant a programmer you may be, I can practically guarantee you won't be able to outsmart the programmers who wrote the scheduler in your operating system.

If you have more than one CPU core available on your PC, this is "normal": thread 1 is running on core 1 and thread 2 on core 2.
I think if you reproduce the same with more threads than available cores, or run e.g. Prime95 in the background to load the CPU to 100%, it should make a difference. But as long as there are no more threads than (available) cores, the scheduler will run each thread on its own core, and (if they are doing roughly the same work) they will finish at about the same time...
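If you do want to observe a priority difference, here is a hedged sketch (not from the question): it sets the priority on the worker thread itself (the question's code only raises the priority of the main thread) and pins the whole process to a single core via Process.ProcessorAffinity, so the two threads must compete for it. Even then the scheduler only favours the higher-priority thread; the console I/O still causes some interleaving.

using System;
using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        // Pin the whole process to core 0 (Windows; the value is a bit mask),
        // so both threads compete for the same core.
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;

        Thread t = new Thread(() => Write("y")) { IsBackground = true };
        t.Priority = ThreadPriority.Lowest;              // priority of the worker itself
        Thread.CurrentThread.Priority = ThreadPriority.Highest;
        t.Start();

        Write("x");                                      // the high-priority main thread now tends to dominate
        Console.ReadLine();
    }

    static void Write(string s)
    {
        for (int i = 0; i < 100000; i++) Console.Write(s);
    }
}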


Are there more tasks being performed than threads? What's happening?

I wrote some simple code. The machine has 32 hardware threads. At the twentieth second I see the number 54 in the console, which means 54 tasks have started. Each task blocks its thread with Thread.Sleep. I don't understand why new tasks keep starting if tasks have already been created and started on all available threads, and the suspension code is running in each of them.
What's going on, how does it work?
void MyMethod(int i)
{
    Console.WriteLine(i);
    Thread.Sleep(int.MaxValue);
}

Console.WriteLine(Environment.ProcessorCount);

for (int i = 0; i < int.MaxValue; i++)
{
    Thread.Sleep(50);
    int j = i;
    Task.Run(() => MyMethod(j));
}
And why does this code create so many tasks? (Environment.ProcessorCount => 32)
using System.Net;

void MyMethod(int i)
{
    Console.WriteLine(WebRequest.Create("https://192.168.1.1").GetResponse().ContentLength);
}

for (int i = 0; i < Environment.ProcessorCount; i++)
{
    int j = i;
    Task.Run(() => MyMethod(j));
}

Thread.Sleep(int.MaxValue);
The Task.Run method runs the code on the ThreadPool, and the ThreadPool creates more threads when it becomes saturated. Initially it creates threads immediately on demand, up to the number of cores. After that point it is said to be saturated, and it creates roughly one new thread per second until the demand is satisfied. This rate is not documented; it was found by experimentation on .NET 6 and might change in future .NET versions.
You can control the saturation threshold with the ThreadPool.SetMinThreads method, for example ThreadPool.SetMinThreads(100, 100). If you give it values that are too large, this method does nothing and returns false.
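A small sketch of the behaviour described above, assuming each task blocks its pool thread forever (the thread count of 100 and the output format are just illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

class PoolSaturationDemo
{
    static void Main()
    {
        Console.WriteLine($"Cores: {Environment.ProcessorCount}");

        // Raise the saturation threshold. Comment this line out to observe
        // the slow (roughly one new thread per second) ramp-up instead.
        ThreadPool.SetMinThreads(100, 100);

        for (int i = 0; i < 100; i++)
        {
            int j = i;
            Task.Run(() =>
            {
                Console.WriteLine($"Task {j} started at {DateTime.Now:HH:mm:ss.fff}");
                Thread.Sleep(Timeout.Infinite); // block the pool thread forever
            });
        }

        Console.ReadLine();
    }
}

With the SetMinThreads line commented out, tasks beyond the core count should appear roughly one per second, matching the behaviour described above.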

Why is the Thread.Start method blocked when CPU load is high?

For test purposes I wrote a CPU stress program: it just does N loop iterations in M threads.
I run this program with a large number of threads, say 200.
But in Task Manager I see that the thread counter does not exceed some small value, say 9, and Thread.Start waits for previously started threads to finish.
This behavior looks like ThreadPool behavior, but I expected that a regular System.Threading.Thread would start anyway, without waiting for anything.
The code below reproduces the issue and contains an option for a workaround:
using System;
using System.Diagnostics;
using System.Threading;

namespace HeavyLoad
{
    class Program
    {
        static long s_loopsPerThread;
        static ManualResetEvent s_startFlag;

        static void Main(string[] args)
        {
            long totalLoops = (long)5e10;
            int threadsCount = 200;
            s_loopsPerThread = totalLoops / threadsCount;
            Thread[] threads = new Thread[threadsCount];

            var watch = Stopwatch.StartNew();
            for (int i = 0; i < threadsCount; i++)
            {
                Thread t = new Thread(IntensiveWork);
                t.IsBackground = true;
                threads[i] = t;
            }
            watch.Stop();
            Console.WriteLine("Creating took {0} ms", watch.ElapsedMilliseconds);

            // *** Comment out s_startFlag creation to change the behavior ***
            // s_startFlag = new ManualResetEvent(false);

            watch = Stopwatch.StartNew();
            foreach (var thread in threads)
            {
                thread.Start();
            }
            watch.Stop();
            Console.WriteLine("Starting took {0} ms", watch.ElapsedMilliseconds);

            if (s_startFlag != null)
                s_startFlag.Set();

            watch = Stopwatch.StartNew();
            foreach (var thread in threads)
            {
                thread.Join();
            }
            watch.Stop();
            Console.WriteLine("Waiting took {0} ms", watch.ElapsedMilliseconds);

            Console.ReadLine();
        }

        private static void IntensiveWork()
        {
            if (s_startFlag != null)
                s_startFlag.WaitOne();

            for (long i = 0; i < s_loopsPerThread; i++)
            {
                // hot point
            }
        }
    }
}
Case 1: If the s_startFlag creation is commented out, the started threads immediately begin CPU-intensive work. In this case I get only a small amount of concurrency (around 9 threads) and almost all of the time is spent in the thread-starting code:
Creating took 0 ms
Starting took 4891 ms
Waiting took 63 ms
Case 2: But if I create s_startFlag, all new threads wait until it is set. In this case I successfully start all 200 threads concurrently and get the expected values: little time spent starting, most of the time spent working, and the thread count in Task Manager is 200+:
Creating took 0 ms
Starting took 27 ms
Waiting took 4733 ms
Why do the threads refuse to start in the first case? What kind of limit am I exceeding?
System:
OS: Windows 7 Professional
Framework: .NET 4.6
CPU: Intel Core2 Quad Q9550 @ 2.83 GHz
RAM: 8 GB
I did some research and now I see that high CPU load really does have a strong effect on thread-start timings.
First: I set totalLoops to a value 100 times bigger to have more time for observation. I saw that the threads are not limited, just created very slowly: one thread starts every 1-2 seconds!
Second: I explicitly bound the main thread to CPU core #0 and the worker threads to cores #1, #2 and #3 using the SetThreadAffinityMask function (https://sites.google.com/site/dotburger/threading/setthreadaffinitymask-1).
Stopwatch watch;
using (ProcessorAffinity.BeginAffinity(0))
{
    watch = Stopwatch.StartNew();
    for (int i = 0; i < threadsCount; i++)
    {
        Thread t = new Thread(IntensiveWork);
        t.IsBackground = true;
        threads[i] = t;
    }
    watch.Stop();
    Console.WriteLine("Creating took {0} ms", watch.ElapsedMilliseconds);
}
and
using (ProcessorAffinity.BeginAffinity(1, 2, 3))
{
    for (long i = 0; i < s_loopsPerThread; i++)
    {
    }
}
Now the main thread has its own dedicated CPU core (within the process boundaries) and the worker threads start after ~10 milliseconds (totalLoops = 5e10).
Creating took 0 ms
Starting took 2282 ms
Waiting took 3681 ms
Additionally, I found this sentence in MSDN:
When you call the Thread.Start method on a thread, that thread might
or might not start executing immediately, depending on the number of
processors and the number of threads currently waiting to execute.
https://msdn.microsoft.com/en-us/library/1c9txz50(v=vs.110).aspx
Conclusion: the Thread.Start method is very sensitive to the number of actively working threads. The performance impact can be very strong, slowing thread startup by a factor of hundreds.

How can I use AutoResetEvent to signal the main thread's Process() function to start threads again once the first set of worker threads is done processing

Requirement: at any given point in time only 4 threads should be calling four different functions. As soon as these threads complete, the next available threads should call the same functions.
Current code: this seems to be the worst possible way to achieve something like this. The while (true) loop causes unnecessary CPU spikes and I could see CPU usage rising to 70% when running the following code.
Question: how can I use AutoResetEvent to signal the main thread's Process() function to start the next 4 threads once the first 4 worker threads are done processing, without wasting CPU cycles? Please suggest.
public class Demo
{
    object protect = new object();
    private int counter;

    public void Process()
    {
        int maxthread = 4;
        while (true)
        {
            if (counter <= maxthread)
            {
                counter++;
                Thread t = new Thread(new ThreadStart(DoSomething));
                t.Start();
            }
        }
    }

    private void DoSomething()
    {
        try
        {
            Thread.Sleep(50000); // simulate long running process
        }
        finally
        {
            lock (protect)
            {
                counter--;
            }
        }
    }
}
You can use the TPL to achieve what you want in a simpler way. If you run the code below you'll notice that an entry is written after each task terminates, and only after all four tasks terminate is the "Finished batch" entry written.
This sample uses Task.WaitAll to wait for the completion of all tasks. The code uses an infinite loop for illustration purposes only; you should calculate the hasPendingWork condition based on your requirements so that you only start a new batch of tasks if required.
For example:
private static void Main(string[] args)
{
    bool hasPendingWork = true;
    do
    {
        var tasks = InitiateTasks();
        Task.WaitAll(tasks);
        Console.WriteLine("Finished batch...");
    } while (hasPendingWork);
}

private static Task[] InitiateTasks()
{
    var tasks = new Task[4];
    for (int i = 0; i < tasks.Length; i++)
    {
        int wait = 1000 * i;
        tasks[i] = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(wait);
            Console.WriteLine("Finished waiting: {0}", wait);
        });
    }
    return tasks;
}
One other thing: from the textual requirement section of your question I'm led to believe that a batch of four new threads should only start after all previous four threads have completed. However, the code you posted is not compatible with that requirement, since it starts a new thread immediately after a previous thread terminates. You should clarify what exactly your requirement is.
UPDATE:
If you want to start a thread immediately after one of the four threads terminates, you can still use the TPL instead of starting new threads explicitly, and limit the number of running tasks to four by using a SemaphoreSlim. For example:
private static SemaphoreSlim TaskController = new SemaphoreSlim(4);

private static void Main(string[] args)
{
    var random = new Random(570);
    while (true)
    {
        // Blocks the thread without wasting CPU
        // if the number of resources (4) is exhausted
        TaskController.Wait();

        Task.Factory.StartNew(() =>
        {
            Console.WriteLine("Started");
            Thread.Sleep(random.Next(1000, 3000));
            Console.WriteLine("Completed");

            // Releases a resource, meaning TaskController.Wait will unblock
            TaskController.Release();
        });
    }
}
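For completeness, since the question literally asks about AutoResetEvent: a minimal sketch of that approach is below. The names (batchSize, DoSomething) are illustrative rather than taken from the original code; the last worker of each batch signals the event, so the main thread blocks in WaitOne() instead of spinning.

using System;
using System.Threading;

class BatchDemo
{
    private static readonly AutoResetEvent _batchDone = new AutoResetEvent(false);
    private static int _remaining;

    static void Main()
    {
        const int batchSize = 4;
        while (true)
        {
            _remaining = batchSize;
            for (int i = 0; i < batchSize; i++)
            {
                new Thread(DoSomething) { IsBackground = true }.Start();
            }

            _batchDone.WaitOne(); // blocks without burning CPU until the batch finishes
            Console.WriteLine("Batch finished, starting the next one...");
        }
    }

    private static void DoSomething()
    {
        Thread.Sleep(5000); // simulate a long running process

        // The last thread of the batch signals the main thread.
        if (Interlocked.Decrement(ref _remaining) == 0)
            _batchDone.Set();
    }
}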

How do you get a list of running threads in C#?

I create dynamic threads in C# and I need to get the status of those running threads.
List<string>[] list;
list = dbConnect.Select();

for (int i = 0; i < list[0].Count; i++)
{
    Thread th = new Thread(() =>
    {
        sendMessage(list[0]['1']);
        // calling callback function
    });
    th.Name = "SID" + i;
    th.Start();
}

for (int i = 0; i < list[0].Count; i++)
{
    // how can I get the list of running threads here?
}
How can you get a list of the running threads?
On Threads
I would avoid explicitly creating threads on your own.
It is much preferable to use ThreadPool.QueueUserWorkItem, or, if you can use .NET 4.0, the much more powerful Task Parallel Library, which also lets you use ThreadPool threads in a much more powerful way (Task.Factory.StartNew is worth a look).
What if we choose to go by the approach of explicitly creating threads?
Let's suppose that your list[0].Count returns 1000 items. Let's also assume that you are performing this on a high-end (at the time of this writing) 16-core machine. The immediate effect is that we have 1000 threads competing for these limited resources (the 16 cores).
The larger the number of tasks and the longer each of them runs, the more time will be spent in context switching. In addition, creating threads is expensive; the overhead of creating each thread explicitly could be avoided by reusing existing threads.
So while the initial intent of multithreading may be to increase speed, as we can see it can have quite the opposite effect.
How do we overcome 'over'-threading?
This is where the ThreadPool comes into play.
A thread pool is a collection of threads that can be used to perform a number of tasks in the background.
How do they work:
Once a thread in the pool completes its task, it is returned to a queue of waiting threads, where it can be reused. This reuse enables applications to avoid the cost of creating a new thread for each task.
Thread pools typically have a maximum number of threads. If all the threads are busy, additional tasks are placed in queue until they can be serviced as threads become available.
So we can see that by using thread pool threads we are more efficient:
- In terms of maximizing the actual work getting done: since we are not over-saturating the processors with threads, less time is spent switching between threads and more time is spent actually executing the code a thread is supposed to run.
- Faster thread startup: each thread-pool thread is readily available, as opposed to waiting until a new thread gets constructed.
- In terms of minimizing memory consumption: the thread pool will limit the number of threads to the thread-pool size, enqueuing any requests that are beyond that limit (see ThreadPool.GetMaxThreads). The primary reason behind this design choice is of course so that we don't over-saturate the limited number of cores with too many thread requests, keeping context switching at lower levels.
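As a quick aside, a tiny sketch (not part of the original answer) showing how to query those pool limits at runtime:

using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIo);

        Console.WriteLine($"Min worker/IO threads:       {minWorker}/{minIo}");
        Console.WriteLine($"Max worker/IO threads:       {maxWorker}/{maxIo}");
        Console.WriteLine($"Available worker/IO threads: {freeWorker}/{freeIo}");
    }
}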
Too much theory; let's put all this theory to the test!
Right, it's nice to know all this in theory, but let's put it into practice and see what the numbers tell us, with a simplified, crude version of the application that can give us a coarse indication of the difference in orders of magnitude. We will do a comparison between new Thread, ThreadPool and the Task Parallel Library (TPL).
new Thread
static void Main(string[] args)
{
    int itemCount = 1000;
    Stopwatch stopwatch = new Stopwatch();
    long initialMemoryFootPrint = GC.GetTotalMemory(true);
    stopwatch.Start();

    for (int i = 0; i < itemCount; i++)
    {
        // You should not use 'i' directly in the thread start as it creates a closure
        // over a changing value, which is not thread safe. Create a copy instead.
        int iCopy = i;
        Thread thread = new Thread(() =>
        {
            // let's simulate something that takes a while
            int k = 0;
            while (true)
            {
                if (k++ > 100000)
                    break;
            }
            if ((iCopy + 1) % 200 == 0) // By the way, what does your sendMessage(list[0]['1']); mean? What is this '1'? If it is i, you are not thread safe.
                Console.WriteLine(iCopy + " - Time elapsed: (ms)" + stopwatch.ElapsedMilliseconds);
        });
        thread.Name = "SID" + iCopy; // you can also use i here
        thread.Start();
    }

    Console.ReadKey();
    Console.WriteLine(GC.GetTotalMemory(false) - initialMemoryFootPrint);
    Console.ReadKey();
}
Result:
ThreadPool.QueueUserWorkItem
static void Main(string[] args)
{
    int itemCount = 1000;
    Stopwatch stopwatch = new Stopwatch();
    long initialMemoryFootPrint = GC.GetTotalMemory(true);
    stopwatch.Start();

    for (int i = 0; i < itemCount; i++)
    {
        // Copy 'i' so each work item captures its own value (closure safety).
        int iCopy = i;
        ThreadPool.QueueUserWorkItem((w) =>
        {
            // let's simulate something that takes a while
            int k = 0;
            while (true)
            {
                if (k++ > 100000)
                    break;
            }
            if ((iCopy + 1) % 200 == 0)
                Console.WriteLine(iCopy + " - Time elapsed: (ms)" + stopwatch.ElapsedMilliseconds);
        });
    }

    Console.ReadKey();
    Console.WriteLine("Memory usage: " + (GC.GetTotalMemory(false) - initialMemoryFootPrint));
    Console.ReadKey();
}
Result:
Task Parallel Library (TPL)
static void Main(string[] args)
{
    int itemCount = 1000;
    Stopwatch stopwatch = new Stopwatch();
    long initialMemoryFootPrint = GC.GetTotalMemory(true);
    stopwatch.Start();

    for (int i = 0; i < itemCount; i++)
    {
        // Copy 'i' so each task captures its own value (closure safety).
        int iCopy = i;
        Task.Factory.StartNew(() =>
        {
            // let's simulate something that takes a while
            int k = 0;
            while (true)
            {
                if (k++ > 100000)
                    break;
            }
            if ((iCopy + 1) % 200 == 0) // By the way, what does your sendMessage(list[0]['1']); mean? What is this '1'? If it is i, you are not thread safe.
                Console.WriteLine(iCopy + " - Time elapsed: (ms)" + stopwatch.ElapsedMilliseconds);
        });
    }

    Console.ReadKey();
    Console.WriteLine("Memory usage: " + (GC.GetTotalMemory(false) - initialMemoryFootPrint));
    Console.ReadKey();
}
Result:
So we can see that:
+--------+------------+------------+--------+
|        | new Thread | ThreadPool |  TPL   |
+--------+------------+------------+--------+
| Time   | 6749ms     | 228ms      | 222ms  |
| Memory | ≈300kb     | ≈103kb     | ≈123kb |
+--------+------------+------------+--------+
The above falls nicely in line with what we anticipated in theory: high memory for new Thread as well as slower overall performance when compared to the ThreadPool. ThreadPool and TPL have equivalent performance, with TPL having a slightly higher memory footprint than a pure thread pool, but it's probably a price worth paying given the added flexibility Tasks provide (such as cancellation, waiting for completion, and querying the status of a task).
At this point, we have shown that using ThreadPool threads is the preferable option in terms of speed and memory.
Still, we have not answered your question: how to track the state of the running threads.
To answer your question
Given the insights we have gathered, this is how I would approach it:
List<string>[] list = dbConnect.Select();
int itemCount = list[0].Count;
Task[] tasks = new Task[itemCount];
stopwatch.Start();

for (int i = 0; i < itemCount; i++)
{
    tasks[i] = Task.Factory.StartNew(() =>
    {
        // NOTE: Do not use i in here, as it is not thread safe to do so!
        sendMessage(list[0]['1']);
        // calling callback function
    });
}

// if required you can wait for all tasks to complete
Task.WaitAll(tasks);

// or for any task you can check its state with properties such as:
tasks[1].IsCanceled
tasks[1].IsCompleted
tasks[1].IsFaulted
tasks[1].Status
As a final note, you cannot use the variable i inside your thread body, since it would create a closure over a changing variable which would effectively be shared amongst all threads. To get around this (assuming you need to access i), simply create a copy of the variable and use the copy; this makes one closure per thread, which makes it thread safe.
Good luck!
Use Process.Threads:
var currentProcess = Process.GetCurrentProcess();
var threads = currentProcess.Threads;
Note: any threads owned by the current process will show up here, including those not explicitly created by you.
If you only want the threads that you created, well, why don't you just keep track of them when you create them?
Create a List<Thread> and store each new thread in your first for loop in it.
List<string>[] list;
List<Thread> threads = new List<Thread>();
list = dbConnect.Select();

for (int i = 0; i < list[0].Count; i++)
{
    Thread th = new Thread(() =>
    {
        sendMessage(list[0]['1']);
        // calling callback function
    });
    th.Name = "SID" + i;
    th.Start();
    threads.Add(th);
}

for (int i = 0; i < list[0].Count; i++)
{
    threads[i].DoStuff(); // placeholder: e.g. Join(), or check IsAlive, etc.
}
However, if you don't need i you can make the second loop a foreach instead of a for.
As a side note, if your sendMessage function does not take very long to execute, you should use something lighter weight than a full Thread: ThreadPool.QueueUserWorkItem or, if it is available to you, a Task.
Process.GetCurrentProcess().Threads
This gives you a list of all threads running in the current process, but beware that there are threads other than those you started yourself, and the entries are System.Diagnostics.ProcessThread objects (OS threads), not the managed Thread instances you created.
Use Process.Threads to iterate through your threads.
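A small sketch of that approach; note that the entries are OS-level ProcessThread objects, so this is only useful for inspection (Id, state, priority), not for calling Join on the threads you created:

using System;
using System.Diagnostics;

class ThreadLister
{
    static void Main()
    {
        Process current = Process.GetCurrentProcess();
        foreach (ProcessThread pt in current.Threads)
        {
            // ProcessThread exposes OS-level information only;
            // it cannot be mapped back to a managed System.Threading.Thread.
            Console.WriteLine($"OS thread {pt.Id}: {pt.ThreadState}");
        }
    }
}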

Multithreading: When would I use a Join?

I see online that it says to use myThread.Join(); when I want to block my thread until another thread finishes. (One of the things I don't get is what happens if I have multiple threads.)
But generally, I just don't get when I'd use .Join() or what situation it's useful for. Can anyone please explain this to me like I'm a fourth grader? A very simple explanation to understand will get my answer vote.
Let's say you want to start some worker threads to perform some kind of calculation, and then do something afterwards with all the results.
List<Thread> workerThreads = new List<Thread>();
List<int> results = new List<int>();

for (int i = 0; i < 5; i++) {
    Thread thread = new Thread(() => {
        Thread.Sleep(new Random().Next(1000, 5000));
        lock (results) {
            results.Add(new Random().Next(1, 10));
        }
    });
    workerThreads.Add(thread);
    thread.Start();
}

// Wait for all the threads to finish so that the results list is populated.
// If a thread is already finished when Join is called, Join will return immediately.
foreach (Thread thread in workerThreads) {
    thread.Join();
}

Debug.WriteLine("Sum of results: " + results.Sum());
Oh yeah, and don't use Random like that, I was just trying to write a minimal, easily understandable example. It ends up not really being random if you create new Random instances too close in time, since the seed is based on the clock.
In the following code snippet, the main thread calls Join() which causes it to wait for all spawned threads to finish:
static void Main()
{
    Thread regularThread = new Thread(ThreadMethod);
    regularThread.Start();

    Thread regularThread2 = new Thread(ThreadMethod2);
    regularThread2.Start();

    // Wait for spawned threads to end.
    regularThread.Join();
    Console.WriteLine("regularThread returned.");

    regularThread2.Join();
    Console.WriteLine("regularThread2 returned.");
}
Note that if you also spun up a thread from the thread pool (using QueueUserWorkItem for instance), Join would not wait for that background thread. You would need to implement some other mechanism such as using an AutoResetEvent.
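A minimal sketch of that AutoResetEvent alternative, assuming a single work item (the event name and messages are illustrative, not from the answer):

using System;
using System.Threading;

class PoolWaitDemo
{
    static void Main()
    {
        using (var done = new AutoResetEvent(false))
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("Pool work item running...");
                Thread.Sleep(1000); // simulate work
                done.Set();         // signal completion, since Join is not available
            });

            done.WaitOne();         // main thread blocks here, much like Join
            Console.WriteLine("Work item finished.");
        }
    }
}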
For an excellent introduction to threading, I recommend reading Joe Albahari's free Threading in C#
This is a very simple program to demonstrate the usage of Thread.Join. Please follow my comments for better understanding, and write this program as it is.
using System;
using System.Threading;

namespace ThreadSample
{
    class Program
    {
        static Thread thread1, thread2;
        static int sum = 0;

        static void Main(string[] args)
        {
            start();
            Console.ReadKey();
        }

        private static void Sample() { sum = sum + 1; }
        private static void Sample2() { sum = sum + 10; }

        private static void start()
        {
            thread1 = new Thread(new ThreadStart(Sample));
            thread2 = new Thread(new ThreadStart(Sample2));

            thread1.Start();
            thread2.Start();

            // thread1.Join();
            // thread2.Join();

            Console.WriteLine(sum);
            Console.WriteLine();
        }
    }
}
1. First, run it as it is (with the Join calls commented out): the result will be 0 (the initial value), 1 (when thread1 finished first) or 10 (when thread2 finished first).
2. Run it with the comment removed from thread1.Join(): the result should always be at least 1, because thread1.Join() is called and thread1 has to finish before the sum is read.
3. Run it with all comments removed: the result should always be 11.
Join is used mainly when you need to wait for a thread (or a bunch of them) to terminate before proceeding with your code.
For this reason it is also particularly useful when you need to collect the result of a thread's execution.
As per Arafangion's comment below, it's also important to join threads if you need to do some cleaning/housekeeping code after having created a thread.
Join makes sure that the threads started above that line have finished before the lines below it are executed.
Another example: say your worker thread reads from an input stream and the read method can run forever, and you want to somehow avoid this. You can apply a timeout using another watchdog thread:
// worker thread
var worker = new Thread(() => {
    Trace.WriteLine("Reading from stream");
    // here is the critical area of the thread, where the real stuff happens
    // Sleep is just an example, simulating any real operation
    Thread.Sleep(10000);
    Trace.WriteLine("Reading finished");
}) { Name = "Worker" };

Trace.WriteLine("Starting worker thread...");
worker.Start();

// watchdog thread
ThreadPool.QueueUserWorkItem((o) => {
    var timeOut = 5000;
    // Join with a timeout: returns false if the worker is still running after timeOut ms
    if (!worker.Join(timeOut))
    {
        Trace.WriteLine("Killing worker thread after " + timeOut + " milliseconds!");
        worker.Abort(); // note: Thread.Abort is only supported on .NET Framework
    }
});
Adding a delay of 300 ms in method Sample and a delay of 400 ms in Sample2 from devopsEMK's post would make it easier to understand.
By doing so you can observe that, with the comment removed from the thread1.Join(); line, the main thread waits for thread1 to complete and only then moves on.
