I've been trying to implement multithreading, which looks something like this:
static void Main(string[] args)
{
    List<Task> tskList = new List<Task>();

    for (int i = 0; i < 100; i++)
    {
        Task taskTemp = new Task(() => { Display(i); });
        taskTemp.Start();
        tskList.Add(taskTemp);

        //Thread.Sleep(10);
    }

    Task.WaitAll(tskList.ToArray());
}

public static void Display(int value)
{
    Thread.Sleep(1000);
    Console.WriteLine(value);
}
Without the Thread.Sleep(10) part, the output is "100" printed 100 times, instead of the 0 to 99 I get with that sleep time of 10 ms.
My guess is that this happens because of the time the system needs to schedule the thread: by the time the thread actually starts, the value of i has already reached 100.
If I put in a long enough wait (say 1000 ms instead of 10), is it guaranteed that this problem won't occur? Or should I expect the system to take even longer to schedule the thread when CPU utilization is high? What is the best way to solve this problem?
Thanks in advance for any inputs!
You should add a local variable to hold 'i', such as:
for (int i = 0; i < 100; i++)
{
    var t = i;
    Task taskTemp = new Task(() => { Display(t); });
    taskTemp.Start();
    tskList.Add(taskTemp);

    //Thread.Sleep(10);
}
Just make a copy of "i" into "i1" and use that as a local variable. "i" keeps changing, which is why you get 100 100 100...:
private static void Main(string[] args)
{
    var tskList = new List<Task>();

    for (var i = 0; i < 100; i++)
    {
        var i1 = i;
        var taskTemp = new Task(() => { Display(i1); });
        taskTemp.Start();
        tskList.Add(taskTemp);
    }

    Task.WaitAll(tskList.ToArray());
}

public static void Display(int value)
{
    Thread.Sleep(1000);
    Console.WriteLine(value);
}
Related
I have a Windows service (written in C#) that uses the Task Parallel Library to perform some parallel tasks (5 tasks at a time).
After the tasks have executed once, I would like to repeat them on an ongoing basis (hourly), i.e. call the QueuePeek method again.
Do I use a timer, or a counter like I have set up in the code snippet below?
I am using a counter to set up the tasks; once I reach five I exit the loop, but I also use a .ContinueWith to decrement the counter, so my thought was that the counter value would drop below 5 and the loop would continue. But my ContinueWith seems to be executing on the main thread, and the loop then exits.
The call to DecrementTaskCounter in the ContinueWith does not seem to work.
FYI: the Importer class loads some libraries using MEF and does the work.
This is my code sample:
private void QueuePeek()
{
    var list = SetUpJobs();
    while (taskCounter < 5)
    {
        int j = taskCounter;
        Task task = null;
        task = new Task(() =>
        {
            DoLoad(j);
        });
        taskCounter += 1;
        tasks[j] = task;
        task.ContinueWith((t) => DecrementTaskCounter());
        task.Start();
        ds.SetJobStatus(1);
    }
    if (taskCounter == 0)
        Console.WriteLine("Completed all tasks.");
}

private void DoLoad(int i)
{
    ILoader loader;
    DataService.DataService ds = new DataService.DataService();
    Dictionary<int, dynamic> results = ds.AssignRequest(i);

    var data = results.Where(x => x.Key == 2).First();
    int loaderId = (int)data.Value;

    Importer imp = new Importer();
    loader = imp.Run(GetLoaderType(loaderId));

    LoaderProcessor lp = new LoaderProcessor(loader);
    lp.ExecuteLoader();
}

private void DecrementTaskCounter()
{
    Console.WriteLine(string.Format("Decrementing task counter with threadId: {0}", Thread.CurrentThread.ManagedThreadId));
    taskCounter--;
}
I see a few issues with your code that can potentially lead to some hard-to-track-down bugs. First, if you are using a counter that all of the tasks can potentially read and write at the same time, use Interlocked. For example:
Interlocked.Increment(ref _taskCounter); // or Interlocked.Decrement(ref _taskCounter);
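Applied to the counter in your code, that might look like this (a sketch; it assumes _taskCounter is a plain int field on the class):

private int _taskCounter;

private void IncrementTaskCounter()
{
    Interlocked.Increment(ref _taskCounter);
}

private void DecrementTaskCounter()
{
    // Interlocked.Decrement returns the new value, so the decrement and
    // the zero check happen without a race between tasks.
    if (Interlocked.Decrement(ref _taskCounter) == 0)
        Console.WriteLine("Completed all tasks.");
}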
If I understand what you're trying to accomplish, I think what you want to do is to use a timer that you re-schedule after each group of tasks is finished.
public class Worker
{
    private System.Threading.Timer _timer;
    private int _timeUntilNextCall = 3600000;

    public void Start()
    {
        _timer = new Timer(new TimerCallback(QueuePeek), null, 0, Timeout.Infinite);
    }

    private void QueuePeek(object state)
    {
        int numberOfTasks = 5;
        Task[] tasks = new Task[numberOfTasks];

        for (int i = 0; i < numberOfTasks; i++)
        {
            tasks[i] = new Task(() =>
            {
                DoLoad();
            });

            tasks[i].Start();
        }

        // When all tasks are complete, set to run this method again in x milliseconds
        Task.Factory.ContinueWhenAll(tasks, (t) => { _timer.Change(_timeUntilNextCall, Timeout.Infinite); });
    }

    private void DoLoad() { }
}
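Usage would then be something like this (hypothetical host code, assuming the service keeps the Worker instance alive):

var worker = new Worker();
worker.Start();    // runs QueuePeek immediately, then re-schedules it after each batch completes
Console.ReadLine();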
I'm starting with the new .NET 4.5 async programming and I ran into the situation in the code below. I have a synchronous method, but I would like to make several calls to it and have them run in parallel. However, in this code all the calls to the sync method run with id = 10, and I'm not sure why (probably I'm misunderstanding something about this new approach).
class Program
{
    static void Main(string[] args)
    {
        var tasks = new List<Task>();
        for (int i = 0; i < 10; i++)
        {
            var foo = new Foo();
            var fooTask = Task.Run(() => foo.FooNonAsyncMethod(i));
            tasks.Add(fooTask);
        }
        tasks.ForEach(t => t.Wait());
        Console.WriteLine("All Done!");
        Console.ReadLine();
    }
}

public class Foo
{
    public void FooNonAsyncMethod(int id)
    {
        Console.WriteLine("Starting {0}", id);
        Thread.Sleep(4000);
        Console.WriteLine("Ending {0}", id);
    }
}
// Output:
// Starting 10
// Starting 10
// Starting 10
// Starting 10
// Ending 10
// Starting 10
// Starting 10
// Ending 10
// Ending 10
// ...
That's because there is only one variable i, and lambda expressions capture the variable, not its value.
You can fix this by using:
for (int i = 0; i < 10; i++)
{
    int newI = i;
    var foo = new Foo();
    var fooTask = Task.Run(() => foo.FooNonAsyncMethod(newI));
    tasks.Add(fooTask);
}
As #Yuriy mentioned, this answer has a lot more info on this particularity : Is there a reason for C#'s reuse of the variable in a foreach?
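Note that since C# 5 the foreach loop variable is scoped per iteration, so the extra copy is only needed for for loops; a quick sketch using the Foo class above:

// foreach (C# 5 and later): each iteration gets its own 'item', so each
// lambda captures a distinct variable and prints a distinct id.
foreach (var item in Enumerable.Range(0, 10))
{
    tasks.Add(Task.Run(() => new Foo().FooNonAsyncMethod(item)));
}

// for: there is a single 'i' shared by all iterations, so the copy is still required.
for (int i = 0; i < 10; i++)
{
    int newI = i;
    tasks.Add(Task.Run(() => new Foo().FooNonAsyncMethod(newI)));
}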
Question: Why does using a WriteOnceBlock (or BufferBlock) to get back the answer (as a sort of callback) from another BufferBlock<Action> (the answer is posted from within that Action) cause a deadlock in this code?
I thought that methods in a class can be considered as messages we send to the object (like the original view of OOP proposed by, I think, Alan Kay). So I wrote this generic Actor class that helps convert an ordinary object into an Actor (of course there are lots of unseen loopholes here because of mutability and such, but that's not the main concern here).
So we have these definitions:
public class Actor<T>
{
    private readonly T _processor;
    private readonly BufferBlock<Action<T>> _messageBox = new BufferBlock<Action<T>>();

    public Actor(T processor)
    {
        _processor = processor;
        Run();
    }

    public event Action<T> Send
    {
        add { _messageBox.Post(value); }
        remove { }
    }

    private async void Run()
    {
        while (true)
        {
            var action = await _messageBox.ReceiveAsync();
            action(_processor);
        }
    }
}

public interface IIdGenerator
{
    long Next();
}
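The IdInt64 type used below isn't defined above; a minimal implementation consistent with IIdGenerator might look like this (an assumed sketch, for context only):

public class IdInt64 : IIdGenerator
{
    private long _current;

    public long Next()
    {
        // Interlocked keeps the increment atomic across calling threads.
        return Interlocked.Increment(ref _current);
    }
}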
Now, why does this code work:
static void Main(string[] args)
{
    var idGenerator1 = new IdInt64();
    var idServer1 = new Actor<IIdGenerator>(idGenerator1);

    const int n = 1000;
    for (var i = 0; i < n; i++)
    {
        var t = new Task(() =>
        {
            var answer = new WriteOnceBlock<long>(null);

            Action<IIdGenerator> action = x =>
            {
                var buffer = x.Next();
                answer.Post(buffer);
            };

            idServer1.Send += action;

            Trace.WriteLine(answer.Receive());
        }, TaskCreationOptions.LongRunning); // Runs on a separate new thread

        t.Start();
    }

    Console.WriteLine("press any key you like! :)");
    Console.ReadKey();
    Trace.Flush();
}
And this code does not work:
static void Main(string[] args)
{
    var idGenerator1 = new IdInt64();
    var idServer1 = new Actor<IIdGenerator>(idGenerator1);

    const int n = 1000;
    for (var i = 0; i < n; i++)
    {
        var t = new Task(() =>
        {
            var answer = new WriteOnceBlock<long>(null);

            Action<IIdGenerator> action = x =>
            {
                var buffer = x.Next();
                answer.Post(buffer);
            };

            idServer1.Send += action;

            Trace.WriteLine(answer.Receive());
        }, TaskCreationOptions.PreferFairness); // Runs and is managed by the Task Scheduler

        t.Start();
    }

    Console.WriteLine("press any key you like! :)");
    Console.ReadKey();
    Trace.Flush();
}
Different TaskCreationOptions are used here to create the Tasks. Maybe I am wrong about the TPL Dataflow concepts; I have only just started using it (is a [ThreadStatic] hidden somewhere?).
The problem with your code is this part: answer.Receive().
When you move it inside the action, the deadlock doesn't happen:
var t = new Task(() =>
{
    var answer = new WriteOnceBlock<long>(null);

    Action<IIdGenerator> action = x =>
    {
        var buffer = x.Next();
        answer.Post(buffer);
        Trace.WriteLine(answer.Receive());
    };

    idServer1.Send += action;
});

t.Start();
So why is that? answer.Receive(), as opposed to await answer.ReceiveAsync(), blocks the thread until an answer is returned. When you use TaskCreationOptions.LongRunning each task gets its own thread, so there's no problem, but without it (the TaskCreationOptions.PreferFairness is irrelevant) all the thread pool threads end up busy waiting, and everything becomes much slower. It doesn't actually deadlock, as you can see when you use 15 instead of 1000.
There are other solutions that help understand the problem:
Increasing the thread pool with ThreadPool.SetMinThreads(1000, 0); before the original code.
Using ReceiveAsync:
Task.Run(async () =>
{
    var answer = new WriteOnceBlock<long>(null);

    Action<IIdGenerator> action = x =>
    {
        var buffer = x.Next();
        answer.Post(buffer);
    };

    idServer1.Send += action;

    Trace.WriteLine(await answer.ReceiveAsync());
});
I want to create a multithreaded application. I want to execute a configured number of threads, with each thread doing some work. I want to know whether this is the right approach or whether there is a better one. All the threads need to execute asynchronously.
public static bool keepThreadsAlive = false;

static void Main(string[] args)
{
    Program pgm = new Program();
    int noOfThreads = 4;
    keepThreadsAlive = true;

    for (int i = 1; i <= noOfThreads; i++)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), (object)i);
    }

    System.Console.ReadLine();
    StopAllThreads();
    System.Console.ReadLine();
}

private static void DoWork(object threadNumber)
{
    int threadNum = (int)threadNumber;
    int counter = 1;
    while (keepThreadsAlive)
    {
        counter = ProcessACK(threadNum, counter);
    }
}

private static int ProcessACK(int threadNum, int counter)
{
    System.Console.WriteLine("Thread {0} count {1}", threadNum, counter++);

    Random ran = new Random();
    int randomNumber = ran.Next(5000, 100000);
    for (int i = 0; i < randomNumber; i++) ;

    Thread.Sleep(2000);
    return counter;
}
As others have pointed out, the methods you are using are dated and not as elegant as the more modern C# approach to accomplishing the same tasks.
Have a look at System.Threading.Tasks for an overview of what is available to you these days. There is even a way to set the maximum number of threads used in a parallel operation. Here is a simple (pseudocode) example:
Parallel.ForEach(someListOfItems, new ParallelOptions { MaxDegreeOfParallelism = 8 }, item =>
{
    // do stuff for each item in "someListOfItems" using a maximum of 8 threads.
});
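If you also need the keepThreadsAlive-style stop signal, the modern equivalent is a CancellationToken; here is a rough sketch (not part of the original answer) that reuses the poster's ProcessACK method:

var cts = new CancellationTokenSource();
var workers = new List<Task>();

for (int i = 1; i <= 4; i++)
{
    int threadNum = i;   // per-iteration copy, same closure rule as in the first question
    workers.Add(Task.Run(() =>
    {
        int counter = 1;
        while (!cts.Token.IsCancellationRequested)
        {
            counter = ProcessACK(threadNum, counter);
        }
    }));
}

Console.ReadLine();
cts.Cancel();                     // replaces the keepThreadsAlive flag
Task.WaitAll(workers.ToArray());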
Hope this helps.
The threading queue example from the book "Accelerated C# 2008" (the CrudeThreadPool class) does not work correctly. If I put a long job in WorkFunction(), on a 2-processor machine the next task does not run before the first one is over. How do I solve this problem? I want to load the processor to 100 percent.
public class CrudeThreadPool
{
    static readonly int MAX_WORK_THREADS = 4;
    static readonly int WAIT_TIMEOUT = 2000;

    public delegate void WorkDelegate();

    public CrudeThreadPool()
    {
        stop = 0;
        workLock = new Object();
        workQueue = new Queue();
        threads = new Thread[MAX_WORK_THREADS];

        for (int i = 0; i < MAX_WORK_THREADS; ++i)
        {
            threads[i] = new Thread(new ThreadStart(this.ThreadFunc));
            threads[i].Start();
        }
    }

    private void ThreadFunc()
    {
        lock (workLock)
        {
            int shouldStop = 0;
            do
            {
                shouldStop = Interlocked.Exchange(ref stop, stop);
                if (shouldStop == 0)
                {
                    WorkDelegate workItem = null;
                    if (Monitor.Wait(workLock, WAIT_TIMEOUT))
                    {
                        // Process the item on the front of the queue
                        lock (workQueue)
                        {
                            workItem = (WorkDelegate)workQueue.Dequeue();
                        }
                        workItem();
                    }
                }
            } while (shouldStop == 0);
        }
    }

    public void SubmitWorkItem(WorkDelegate item)
    {
        lock (workLock)
        {
            lock (workQueue)
            {
                workQueue.Enqueue(item);
            }

            Monitor.Pulse(workLock);
        }
    }

    public void Shutdown()
    {
        Interlocked.Exchange(ref stop, 1);
    }

    private Queue workQueue;
    private Object workLock;
    private Thread[] threads;
    private int stop;
}
public class EntryPoint
{
    static void WorkFunction()
    {
        Console.WriteLine("WorkFunction() called on Thread {0}", Thread.CurrentThread.GetHashCode());

        // some long job
        double s = 0;
        for (int i = 0; i < 100000000; i++)
            s += Math.Sin(i);
    }

    static void Main()
    {
        CrudeThreadPool pool = new CrudeThreadPool();
        for (int i = 0; i < 10; ++i)
        {
            pool.SubmitWorkItem(
                new CrudeThreadPool.WorkDelegate(EntryPoint.WorkFunction));
        }

        pool.Shutdown();
    }
}
I can see 2 problems:
Inside ThreadFunc() you take lock (workLock) for the duration of the method, meaning your thread pool is no longer asynchronous.
In the Main() method, you shut down the thread pool without waiting for it to finish. Oddly enough, that is why it works at all right now: it stops each ThreadFunc after one loop.
It also looks to me like the work item is executed while still holding workLock - which is basically going to serialize all the work.
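One way to fix that, keeping the original fields, is to hold workLock only while waiting and dequeuing, and to invoke the work item after the lock has been released; a sketch of the idea:

private void ThreadFunc()
{
    // CompareExchange(ref stop, 0, 0) is simply an atomic read of 'stop'.
    while (Interlocked.CompareExchange(ref stop, 0, 0) == 0)
    {
        WorkDelegate workItem = null;

        lock (workLock)
        {
            // Wait for a pulse (or a timeout) while holding the lock...
            if (Monitor.Wait(workLock, WAIT_TIMEOUT) && workQueue.Count > 0)
            {
                workItem = (WorkDelegate)workQueue.Dequeue();
            }
        }

        // ...but run the item outside the lock, so the worker threads can overlap.
        if (workItem != null)
            workItem();
    }
}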
If at all possible, I suggest you start using the Parallel Extensions framework in .NET 4, which has obviously had rather more time spent on it. Otherwise, there's the existing thread pool in the framework, and there are other implementations around if you're willing to have a look. I have one in MiscUtil although I haven't looked at the code for quite a while - it's pretty primitive.