In the following C# code, t1 always (for the times I tried) finishes.
class MainClass
{
    static void DoExperiment ()
    {
        int value = 0;
        Thread t1 = new Thread (() => {
            Console.WriteLine ("T one is running now");
            while (value == 0) {
                // do nothing
            }
            Console.WriteLine ("T one is done now");
        });
        Thread t2 = new Thread (() => {
            Console.WriteLine ("T two is running now");
            Thread.Sleep (1000);
            value = 1;
            Console.WriteLine ("T two changed value to 1");
            Console.WriteLine ("T two is done now");
        });
        t1.Start ();
        t2.Start ();
        t1.Join ();
        t2.Join ();
    }

    public static void Main (string[] args)
    {
        for (int i = 0; i < 10; i++) {
            DoExperiment ();
            Console.WriteLine ("------------------------");
        }
    }
}
But in the Java code, which is very similar, t1 never (for the times I tried) exits:
public class MainClass {
    static class Experiment {
        private int value = 0;

        public void doExperiment() throws InterruptedException {
            Thread t1 = new Thread(new Runnable() {
                @Override
                public void run() {
                    System.out.println("T one is running now");
                    while (value == 0) {
                        // do nothing
                    }
                    System.out.println("T one is done now");
                }
            });
            Thread t2 = new Thread(new Runnable() {
                @Override
                public void run() {
                    System.out.println("T two is running now");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    value = 1;
                    System.out.println("T two changed value to 1");
                    System.out.println("T two is done now");
                }
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            new Experiment().doExperiment();
            System.out.println("------------------------");
        }
    }
}
Why is that?
I'm not sure how it happens in C#, but what happens in Java is a JVM optimization. The value of value never changes inside the while loop, so the JIT is free to hoist the read out of the loop and effectively rewrite your bytecode to something like this:
while (true) {
    // do nothing
}
To fix this in Java you need to declare value as volatile:
private volatile int value = 0;
This prevents the JVM from optimising the loop this way and forces it to read the actual value of value at the start of each iteration.
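A minimal, self-contained sketch of the fix (class and method names are illustrative, not the asker's code): with the field declared volatile, the writer's update is guaranteed to become visible to the spinning reader, so the loop terminates.

```java
// Sketch of the volatile fix; names are illustrative.
public class VolatileDemo {
    private static volatile int value = 0;

    // Returns true if the reader thread saw the write and exited.
    static boolean runExperiment() throws InterruptedException {
        value = 0;
        Thread reader = new Thread(() -> {
            while (value == 0) {
                // busy-wait: the volatile read cannot be hoisted out of the loop
            }
        });
        reader.start();
        Thread.sleep(100);
        value = 1;           // volatile write happens-before the reader's next read
        reader.join(5000);   // generous timeout; normally returns almost instantly
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader exited: " + runExperiment());
    }
}
```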
There are a couple of things here.
Firstly, when you do:
t1.Start ();
t2.Start ();
You're asking the operating system to schedule the threads for running. It's possible that t2 will start first; in fact it may even finish before t1 is ever scheduled to run.
However, there is a memory model issue here. Your threads may run on different cores. It's possible that value is sitting in the CPU cache of each core, or is stored in a register on each core, and when you read/write to value you are writing to the cache value. There's no requirement for the language runtime to flush the writes to value back to main memory, and there's no requirement for it to read the value back from main memory each time.
If you want to access a shared variable then it's your responsibility to tell the runtime that the variable is shared, and that it must read/write from main memory and/or flush the CPU cache. This is typically done with lock, Interlocked or synchronized constructs in C# and Java. If you surround access to value with a lock (in C#) or synchronized (in Java) then you should see consistent results.
The reason things behave differently without locking is that each language defines a memory model, and these models are different. Without going into the specifics, C# on x86 writes back to main memory more than the Java memory model does. This is why you're seeing different outcomes.
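As a hedged illustration of the locking advice (again in Java, and not the asker's code): the same loop made safe with synchronized accessors instead of volatile, where acquiring and releasing the lock provides the visibility guarantee.

```java
// Illustrative sketch: visibility via synchronized accessors.
public class SyncedFlag {
    private int value = 0;                      // guarded by 'this'

    public synchronized int get()       { return value; }
    public synchronized void set(int v) { value = v; }

    // Returns true if the spinning reader observed the write.
    static boolean demo() throws InterruptedException {
        SyncedFlag flag = new SyncedFlag();
        Thread reader = new Thread(() -> {
            while (flag.get() == 0) {
                // each get() acquires the lock, so the write below becomes visible
            }
        });
        reader.start();
        Thread.sleep(100);
        flag.set(1);        // releasing the lock publishes the write
        reader.join(5000);
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader exited: " + demo());
    }
}
```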
Edit: For more information on the C# side of things take a look at Chapter 4 of Threading in C# by Joseph Albahari.
Related
Is the following code thread-safe?
public object DemoObject { get; set; }

public void DemoMethod()
{
    if (DemoObject is IDemoInterface demo)
    {
        demo.DoSomething();
    }
}
If other threads modify DemoObject (e.g. set to null) while DemoMethod is being processed, is it guaranteed that within the if block the local variable demo will always be assigned correctly (to an instance of type IDemoInterface)?
The is construct here is atomic, much like Interlocked. However, the behavior of this code is almost completely nondeterministic. Unless the objective is to create unpredictable, nondeterministic behavior, this is a bug.
A valid usage example of this code: in a game, to simulate the possibility of some nondeterministic event such as "Neo from the Matrix catching a bullet in mid air", this method may be more nondeterministic than simply using a pseudo-random number generator.
In any scenario where deterministic/predictable behavior is expected, this code is a bug.
Explanation:
if (DemoObject is IDemoInterface demo)
is evaluated and assigned pseudo-atomically.
Thereafter, within the if statement:
even if DemoObject is set to null by another thread, the value of demo has already been assigned, and DoSomething() is executed on the already-assigned instance.
To answer your comment questions:
why is there a race?
The race condition is by design in this code. In the example code below:
16 threads are competing to set the value of DemoObject to null
while another 16 threads are competing to set the value of DemoObject to an instance of DemoClass.
At the same time 16 threads are competing to execute DoSomething() whenever they win the race condition when DemoObject is NOT null.
See: What is a race condition?
and why can i not predict whether DoSomething() will execute?
DoSomething() will execute each time
if (DemoObject is IDemoInterface demo)
evaluates to true. Each time DemoObject is null or NOT IDemoInterface it will NOT execute.
You cannot predict when it will execute. You can only predict that it will execute whenever the thread executing DoSomething() manages to get a reference to a non null instance of DemoObject. Or in other words when a thread running DemoMethod() manages to win the race condition:
A) after a thread running DemoMethod_Assign() wins the race condition
B) and before a thread running DemoMethod_Null() wins the race condition
Caveat: as per my understanding (someone else please clarify this point), DemoObject may be both null and not null at the same time across different threads.
DemoObject may be read from a CPU cache or from main memory. We cannot mark it volatile, since it is an auto-implemented property rather than a field (a plain field holding an object reference could be declared volatile). Therefore the state of DemoObject may simultaneously be null for one thread and not null for another; its value is nondeterministic. In Schrödinger's thought experiment the cat is both dead and alive; we have much the same situation here.
There are no locks or memory barriers in this code with respect to DemoObject. However, a thread context switch forces the equivalent of a memory barrier, so any thread resuming after a context switch will read the value of DemoObject as stored in main memory. A different thread may have altered DemoObject without that change having been flushed to main memory yet, which raises the question of which value is the accurate one: the value fetched from main memory, or the value not yet flushed to it.
Note: Someone else please clarify this Caveat as I may have missed something.
Here is some code to validate everything above except the caveat. I ran this console-app test on a machine with 64 logical cores; a NullReferenceException is never thrown.
internal class Program
{
    private static ManualResetEvent BenchWaitHandle = new ManualResetEvent(false);

    private class DemoClass : IDemoInterface
    {
        public void DoSomething()
        {
            Interlocked.Increment(ref Program.DidSomethingCount);
        }
    }

    private interface IDemoInterface
    {
        void DoSomething();
    }

    private static object DemoObject { get; set; }
    public static volatile int DidSomethingCount = 0;

    private static void DemoMethod()
    {
        BenchWaitHandle.WaitOne();
        for (int i = 0; i < 100000000; i++)
        {
            try
            {
                if (DemoObject is IDemoInterface demo)
                {
                    demo.DoSomething();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }
    }

    private static bool m_IsRunning = false;
    private static object RunningLock = new object();

    private static bool IsRunning
    {
        get { lock (RunningLock) { return m_IsRunning; } }
        set { lock (RunningLock) { m_IsRunning = value; } }
    }

    private static void DemoMethod_Assign()
    {
        BenchWaitHandle.WaitOne();
        while (IsRunning)
        {
            DemoObject = new DemoClass();
        }
    }

    private static void DemoMethod_Null()
    {
        BenchWaitHandle.WaitOne();
        while (IsRunning)
        {
            DemoObject = null;
        }
    }

    static void Main(string[] args)
    {
        List<Thread> threadsListDoWork = new List<Thread>();
        List<Thread> threadsList = new List<Thread>();
        BenchWaitHandle.Reset();
        for (int i = 0; i < 16; i++)
        {
            threadsListDoWork.Add(new Thread(new ThreadStart(DemoMethod)));
            threadsList.Add(new Thread(new ThreadStart(DemoMethod_Assign)));
            threadsList.Add(new Thread(new ThreadStart(DemoMethod_Null)));
        }
        foreach (Thread t in threadsListDoWork)
        {
            t.Start();
        }
        foreach (Thread t in threadsList)
        {
            t.Start();
        }
        IsRunning = true;
        BenchWaitHandle.Set();
        foreach (Thread t in threadsListDoWork)
        {
            t.Join();
        }
        IsRunning = false;
        foreach (Thread t in threadsList)
        {
            t.Join();
        }
        Console.WriteLine("Did Something {0} times", DidSomethingCount);
        Console.ReadLine();
    }
}
// On the last run this printed
// Did Something 112780926 times
// which means that the DemoMethod() threads won the race condition slightly over 7% of the time.
If this were C++, I'd use a pointer. I'm struggling with the C# syntax and new enough to the language that I don't know how to solve this.
This example has a member variable in the class "Program" named "ProgramDat" that I want to access from a thread in another class. In addition, I want the thread to be able to change that memory location and have Program::Main() see the update, as well as have Program::Main() change the variable and have the thread see it.
ref doesn't seem to do it. Is there another construct I could use?
(And I recognize the potential race conditions here. I left out any synchronization techniques for simplicity of this post.)
Thanks.
using System;
using System.Threading;

public class Program
{
    static ManualResetEvent ExitEvent;
    static MyThread t;
    static int ProgramDat;

    public static void Main()
    {
        ExitEvent = new ManualResetEvent(false);
        ProgramDat = 12; // initialize to some value
        t = new MyThread(ExitEvent, ref ProgramDat);
        Thread.Sleep(1500); // let MyThread run a bit

        // Main() doesn't see the changes MyThread::RunLoop() made
        Console.WriteLine("Main just before setting to 500, MyFormDat={0}", ProgramDat);
        ProgramDat = 500; // and this change is not seen in MyThread::RunLoop();
        Console.WriteLine("Main just set MyFormDat={0}", ProgramDat);
        Thread.Sleep(2000);
        Console.WriteLine("Just prior to telling thread to exit, MyFormDat={0}", ProgramDat);
        ExitEvent.Set();
        Thread.Sleep(500); // wait to let MyThread::RunLoop() finish
        Console.WriteLine("main leaving. MyFormDat={0}", ProgramDat);
    }
}

public class MyThread
{
    ManualResetEvent e;
    Thread t;
    int MyThreadDat;

    public MyThread(ManualResetEvent ev, ref int FromProgramDat)
    {
        e = ev;
        MyThreadDat = FromProgramDat;
        t = new Thread(RunLoop);
        t.Start();
    }

    public void RunLoop()
    {
        while (true)
        {
            if (e.WaitOne(300) == true)
            {
                Console.WriteLine("RunLoop leaving!");
                return;
            }
            else
            {
                Console.WriteLine("tic. iFormDat={0}", MyThreadDat);
                MyThreadDat++; // change it each loop but I can't get Main() to see this change
            }
        }
    }
}
Most types in .NET are reference types that have reference semantics just like pointers in C/C++ (the main exceptions are value types: enums and structs).
But in this case just make ProgramDat internal and it can be accessed by other types as Program.ProgramDat as long as the other type is in the same assembly.
If you have significant state to share (and unless it is immutable that's an anti-pattern) wrap it in a reference type, instantiate in Main and pass the reference into the thread.
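The same "wrap it in a reference type" idea, sketched in Java for symmetry with the first question in this thread (AtomicInteger is just one convenient holder; a tiny class with a single int field would serve equally well):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Both the caller and the worker hold the same reference, so each
// sees the other's updates through it.
public class SharedHolder {
    static int demo() throws InterruptedException {
        AtomicInteger shared = new AtomicInteger(12);      // initialize to some value
        Thread worker = new Thread(() -> shared.set(500)); // mutate via the shared reference
        worker.start();
        worker.join();
        return shared.get();                               // the caller observes 500
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 500
    }
}
```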
NB: the ref modifier changes parameters from pass-by-value to pass-by-reference; this allows the called function to change the value of a variable in the caller.
void PassByRef(ref int x) {
    x = 42;
}

void PassByValue(int x) {
    // This has no effect on the caller's variable
    x = 84;
}

void Caller() {
    int x = 1;
    Console.WriteLine(x); // Writes 1
    PassByRef(ref x);
    Console.WriteLine(x); // Writes 42
    PassByValue(x);
    Console.WriteLine(x); // Writes 42: no change by PassByValue
}
I'm trying to simulate (very basic & simple) OS process manager subsystem, I have three "processes" (workers) writing something to console (this is an example):
public class Message
{
    public Message() { }

    public void Show()
    {
        while (true)
        {
            Console.WriteLine("Something");
            Thread.Sleep(100);
        }
    }
}
Each worker is supposed to be run on a different thread. That's how I do it now:
I have a Process class whose constructor takes an Action delegate, starts a thread from it, and then suspends that thread.
public class Process
{
    Thread thrd;
    Action act;

    public Process(Action act)
    {
        this.act = act;
        thrd = new Thread(new ThreadStart(this.act));
        thrd.Start();
        thrd.Suspend();
    }

    public void Suspend()
    {
        thrd.Suspend();
    }

    public void Resume()
    {
        thrd.Resume();
    }
}
In that state it waits until my scheduler resumes it, gives it a time slice to run, and then suspends it again.
public void Scheduler()
{
    while (true)
    {
        // ProcessQueue is just a FIFO queue of processes;
        // mainQueue is a FIFO queue of ProcessQueues
        ProcessQueue currentQueue = mainQueue.Dequeue();
        int count = currentQueue.Count;
        if (currentQueue.Count > 0)
        {
            while (count > 0)
            {
                Process currentProcess = currentQueue.GetNext();
                currentProcess.Resume();
                // this is the time slice given to the process
                Thread.Sleep(1000);
                currentProcess.Suspend();
                Console.WriteLine();
                currentQueue.Add(currentProcess);
                count--;
            }
        }
        mainQueue.Enqueue(currentQueue);
    }
}
The problem is that it doesn't work consistently. In this state it doesn't work at all; I have to add a Thread.Sleep() before the WriteLine in the worker's Show() method, like this:
public void Show()
{
    while (true)
    {
        Thread.Sleep(100); // Without this line the code doesn't work
        Console.WriteLine("Something");
        Thread.Sleep(100);
    }
}
I've tried using a ManualResetEvent instead of suspend/resume. It works, but since the event is shared, all threads relying on it wake up simultaneously, while I need only one specific thread to be active at a time.
If someone could help me figure out how to pause/resume a task/thread properly, that'd be great.
What I'm doing is trying to simulate simple preemptive multitasking.
Thanks.
Thread.Suspend is evil. It is about as evil as Thread.Abort. Almost no code is safe in the presence of being paused at arbitrary, unpredictable locations. It might hold a lock that causes other threads to pause as well. You quickly run into deadlocks or unpredictable stalls in other parts of the system.
Imagine you accidentally paused the static constructor of string: now all code that wants to use a string halts as well. Regex internally uses a locked cache; if you pause while that lock is taken, all Regex-related code might pause too. These are just two egregious examples.
Probably, suspending some code deep inside the Console class is having unintended consequences.
I'm not sure what to recommend to you. This seems to be an academic exercise, so thankfully it is not a production problem for you. In practice, user-mode waiting and cancellation must be cooperative.
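For completeness, one common cooperative alternative, sketched here in Java (names are illustrative, not the asker's code): the worker checks a pause flag at a known-safe point in its loop, and the scheduler toggles the flag instead of calling Suspend/Resume.

```java
// Cooperative pause/resume: the worker only ever blocks at a safe point,
// holding no locks other than its own pauseLock.
public class PausableWorker implements Runnable {
    private final Object pauseLock = new Object();
    private boolean paused = true;          // guarded by pauseLock
    private volatile boolean done = false;
    private volatile int ticks = 0;         // written only by the worker thread

    public void pause()    { synchronized (pauseLock) { paused = true; } }
    public void unpause()  { synchronized (pauseLock) { paused = false; pauseLock.notifyAll(); } }
    public void shutdown() { done = true; unpause(); }
    public int ticks()     { return ticks; }

    @Override public void run() {
        while (!done) {
            synchronized (pauseLock) {
                while (paused && !done) {
                    try { pauseLock.wait(); }            // parked at a safe point
                    catch (InterruptedException e) { return; }
                }
            }
            ticks++;                                     // the "time slice" of real work
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PausableWorker w = new PausableWorker();
        Thread t = new Thread(w);
        t.start();
        w.unpause();            // grant a time slice
        Thread.sleep(100);
        w.shutdown();           // cooperative stop
        t.join();
        System.out.println("worker ticked: " + (w.ticks() > 0));
    }
}
```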
I managed to solve this problem using a static class with an array of ManualResetEvents, where each process is identified by its unique ID. But I think it's a pretty dirty way to do it, and I'm open to other ways of accomplishing this.
UPD: added locks to guarantee thread safety
public sealed class ControlEvent
{
    private static ManualResetEvent[] control = new ManualResetEvent[100];
    private static readonly object _locker = new object();

    private ControlEvent() { }

    public static object Locker
    {
        get { return _locker; }
    }

    public static void Set(int PID)
    {
        control[PID].Set();
    }

    public static void Reset(int PID)
    {
        control[PID].Reset();
    }

    public static ManualResetEvent Init(int PID)
    {
        control[PID] = new ManualResetEvent(false);
        return control[PID];
    }
}
In worker class
public class RandomNumber
{
    static Random R = new Random();
    ManualResetEvent evt;

    public ManualResetEvent Event
    {
        get { return evt; }
        set { evt = value; }
    }

    public void Show()
    {
        while (true)
        {
            evt.WaitOne();
            lock (ControlEvent.Locker)
            {
                Console.WriteLine("Random number: " + R.Next(1000));
            }
            Thread.Sleep(100);
        }
    }
}
At Process creation event
RandomNumber R = new RandomNumber();
Process proc = new Process(new Action(R.Show));
R.Event = ControlEvent.Init(proc.PID);
And, finally, in scheduler
public void Scheduler()
{
    while (true)
    {
        ProcessQueue currentQueue = mainQueue.Dequeue();
        int count = currentQueue.Count;
        if (currentQueue.Count > 0)
        {
            while (count > 0)
            {
                Process currentProcess = currentQueue.GetNext();
                // this wakes the thread
                ControlEvent.Set(currentProcess.PID);
                Thread.Sleep(quant);
                // this makes it wait again
                ControlEvent.Reset(currentProcess.PID);
                currentQueue.Add(currentProcess);
                count--;
            }
        }
        mainQueue.Enqueue(currentQueue);
    }
}
The single best advice I can give with regard to Suspend() and Resume(): Don't use it. You are doing it wrong™.
Whenever you feel a temptation to use Suspend() and Resume() pairs to control your threads, you should step back immediately and ask yourself what you are doing. I understand that programmers tend to think of the execution of code paths as something that must be controlled, like some dumb zombie worker that needs permanent command and control. That's probably a function of what we learn about computers in school and university: computers only do what you tell them.
Ladies and gentlemen, here's the bad news: if you are doing it that way, it's called "micro-management", and some would even call it "control-freak thinking".
Instead, I would strongly encourage you to think about it in a different way. Try to think of your threads as intelligent entities that do no harm, and whose only wish is to be fed enough work. They just need a little guidance, that's all. You may place a container full of work in front of them (a task queue) and have them pull tasks from that container themselves as soon as they finish their previous task. When the container is empty, all tasks have been processed and there's nothing left to do, so they are allowed to fall asleep and WaitFor(alarm), which will be signaled whenever new tasks arrive.
So instead of command-and-controlling a herd of dumb zombie slaves that can't do anything right without you cracking the whip behind them, you deliberately guide a team of intelligent co-workers and just let it happen. That's the way a scalable architecture is built. You don't have to be a control freak; just have a little faith in your own code.
Of course, as always, there are exceptions to that rule. But there aren't that many, and I would recommend to start with the work hypothesis, that your code is probably the rule, rather than the exception.
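A sketch of that pull model in Java (illustrative; the thread pool's internal queue stands in for the "container full of work"): workers take the next task themselves as soon as they finish the previous one, and nobody suspends or resumes them from outside.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Workers pull tasks from the pool's queue as they become free.
public class PullModelDemo {
    static int runJobs(int jobs, int workers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < jobs; i++) {
            pool.submit(() -> { completed.incrementAndGet(); });  // feed the container
        }
        pool.shutdown();                              // no new work; let the queue drain
        pool.awaitTermination(10, TimeUnit.SECONDS);  // workers "fall asleep" when done
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runJobs(100, 4)); // prints 100
    }
}
```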
The form I'm trying to develop has an array of 6 picture boxes and an array of 6 die images. I have a button that, when clicked, needs to create 6 threads that "roll" the dice, showing each image for a moment. The problem I'm having is that I need to call a method within the button click after the dice have been rolled. I can get the dice to roll, but the message box is displayed immediately. I've tried a few different ways and get various errors. In the non-working version below, the program freezes. I've checked out a ton of resources, but I'm just not grasping some concepts like delegates and Invoke all that well.
Any help would be great! Here's my program
namespace testDice
{
public partial class Form1 : Form
{
private Image[] imgAr;
private PictureBox[] picBoxAr;
private Random r;
private Thread[] tArray;
private ThreadStart tStart;
private delegate void setTheImages();
public Form1()
{
InitializeComponent();
setImageArray();
setPicBoxAr();
}
private void setImageArray()
{
imgAr = new Image[6];
imgAr[0] = testDice.Properties.Resources.die6;
imgAr[1] = testDice.Properties.Resources.die1;
imgAr[2] = testDice.Properties.Resources.die2;
imgAr[3] = testDice.Properties.Resources.die3;
imgAr[4] = testDice.Properties.Resources.die4;
imgAr[5] = testDice.Properties.Resources.die5;
}
private void setPicBoxAr()
{
picBoxAr = new PictureBox[6];
picBoxAr[0] = pictureBox1;
picBoxAr[1] = pictureBox2;
picBoxAr[2] = pictureBox3;
picBoxAr[3] = pictureBox4;
picBoxAr[4] = pictureBox5;
picBoxAr[5] = pictureBox6;
}
private void button1_Click(object sender, EventArgs e)
{
roll();
//wait for threads to finish and update images--doesn't work
for (int n = 0; n < 6; n++)
{
while (tArray[n].IsAlive)
{
for (int i = 0; i < 6; i++)
{
this.picBoxAr[i].Update();
}
}
}
MessageBox.Show("Each die has its own thread");
}
private void roll()
{
this.tStart = new ThreadStart(RunAllDiceThreads);
this.tArray = new Thread[6];
for (int i = 0; i < 6; i++)
{
this.tArray[i] = new Thread(tStart);
this.tArray[i].Start();
}
}
private void RunAllDiceThreads()
{
int n = 0;
while (n < 50)
{
setImg();
Thread.Sleep(50);
n++;
}
for (int i = 0; i < 6; i++)
{
if (tArray[i] != null)
{
tArray[i].Abort();
tArray[i] = null;
}
}
}// end RunAllDiceThreads
private void setImg()
{
r = new Random();
for (int i = 0; i < 6; i++)
{
if (this.picBoxAr[i].InvokeRequired)
{
setTheImages s = new setTheImages(setImg);
// parameter mismatch error here
//this.Invoke(s, new object[] { imgAr[r.Next(6)] });
//Freezes here!!
this.Invoke(s);
}
else
{
this.picBoxAr[i].Image = imgAr[r.Next(6)];
}
}
}//end setImg
}// end class Form1
}//end namespace testDice
Sounds like you're getting a deadlock between your invocation of setting the images and your update of the picture boxes.
I'd recommend rethinking your program a bit. It almost seems to be built on the concept that an individual die is modeled by an individual thread. Separate the state of the die from the state of the thread. For example, you might create a Die class with state such as IsRolling or CurrentValue, and use and modify objects of that class (and that class only) inside the loops in your worker threads. That way, you won't have to invoke back to your UI thread to update, and the dependencies are a lot cleaner.
You might then create a Timer in your UI thread which fires periodically (say 10-30 times a second), reads the state of each of the dice, and updates the images. That's a lot safer in terms of deadlocks because you don't have any cyclic dependencies, and it will likely produce a more attractive interface because your die images will update in a smoother, more predictable fashion.
Another rule of thumb: don't call Thread.Abort() (see references). It's generally a lot safer to expose a property on a Die object and simply read from that to update your UI.
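A hedged sketch of that separation, in Java terms (a hypothetical Die class; in the Forms version a UI timer would poll currentValue() instead of the worker invoking into the UI):

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Die state lives in its own object; the rolling thread writes it and
// any other thread (e.g. a UI timer) reads it, with no cross-thread Invoke.
public class Die {
    private final AtomicInteger face = new AtomicInteger(1);
    private volatile boolean rolling = false;
    private final Random rng = new Random();

    public int currentValue()  { return face.get(); }
    public boolean isRolling() { return rolling; }

    public void roll(int spins) {
        rolling = true;
        for (int i = 0; i < spins; i++) {
            face.set(rng.nextInt(6) + 1);   // faces 1..6
        }
        rolling = false;
    }

    public static void main(String[] args) throws InterruptedException {
        Die die = new Die();
        Thread roller = new Thread(() -> die.roll(50));
        roller.start();
        roller.join();
        System.out.println("face: " + die.currentValue() + ", rolling: " + die.isRolling());
    }
}
```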
You need to remove MessageBox.Show("Each die has its own thread"); from button1_Click.
Create a property to track how many threads have returned. When it hits 6, invoke MessageBox.Show("Each die has its own thread"); (you will probably want to put this call in its own method and invoke that method).
Your problem is that you are starting the threads and then showing the message box while they are still running, rather than waiting for the threads to return.
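One way to sketch that "wait until all the dice threads have returned" step, shown here in Java (illustrative names; a CountDownLatch plays the role of the counter property):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Each die thread counts down when it finishes; the "show message" step
// waits until the count reaches zero instead of polling IsAlive.
public class RollAndWait {
    static boolean rollAll(int dice) throws InterruptedException {
        CountDownLatch allDone = new CountDownLatch(dice);
        for (int i = 0; i < dice; i++) {
            new Thread(() -> {
                try { Thread.sleep(50); }              // stand-in for the roll animation
                catch (InterruptedException ignored) { }
                allDone.countDown();                   // this die's thread has returned
            }).start();
        }
        return allDone.await(5, TimeUnit.SECONDS);     // true once all dice finished
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("all rolled: " + rollAll(6));
    }
}
```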
If you're able to work against the latest version of the .Net Framework, I would recommend making use of the System.Threading.Tasks namespace. The nice thing is that it encapsulates a lot of the multithreading details and makes things much cleaner. Here's a simple example.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace TasksExample
{
class Program
{
static void Main(string[] args)
{
// holds all the tasks you're trying to run
List<Task> waitingTasks = new List<Task>();
// a simple object to lock on
object padlock = new object();
// simple shared value that each task can access
int sharedValue = 1;
// add each new task to the list above. The best way to create a task is to use the Task.Factory.StartNew() method.
// you can also use Task.Factory<RETURNVALUE>.StartNew() method to return a value from the task
waitingTasks.Add(Task.Factory.StartNew(() =>
{
// this makes sure that we don't enter a race condition when trying to access the
// shared value
lock (padlock)
{
// note how we don't need to explicitly pass the sharedValue to the task, it's automatically available
Console.WriteLine("I am thread 1 and the shared value is {0}.", sharedValue++);
}
}));
waitingTasks.Add(Task.Factory.StartNew(() =>
{
lock (padlock)
{
Console.WriteLine("I am thread 2 and the shared value is {0}.", sharedValue++);
}
}));
waitingTasks.Add(Task.Factory.StartNew(() =>
{
lock (padlock)
{
Console.WriteLine("I am thread 3 and the shared value is {0}.", sharedValue++);
}
}));
waitingTasks.Add(Task.Factory.StartNew(() =>
{
lock (padlock)
{
Console.WriteLine("I am thread 4 and the shared value is {0}.", sharedValue++);
}
}));
waitingTasks.Add(Task.Factory.StartNew(() =>
{
lock (padlock)
{
Console.WriteLine("I am thread 5 and the shared value is {0}.", sharedValue++);
}
}));
waitingTasks.Add(Task.Factory.StartNew(() =>
{
lock (padlock)
{
Console.WriteLine("I am thread 6 and the shared value is {0}.", sharedValue++);
}
}));
// once you've spun up all the tasks, pass an array of the tasks to Task.WaitAll, and it will
// block until all tasks are complete
Task.WaitAll(waitingTasks.ToArray());
Console.WriteLine("Hit any key to continue...");
Console.ReadKey(true);
}
}
}
I hope this helps, and let me know if you need any more help.
I have a timer calling a function every 15 minutes. This function counts the number of lines in my DGV and starts a thread for each line (running yet another function); each thread parses a web page, which can take anywhere from 1 to 10 seconds to finish.
While it works fine as it is with 1-6 rows, any more than that causes the requests to time out.
I want to wait for each newly created thread to finish before going back into the loop to create another thread, without locking the main UI:
for (int x = 0; x <= dataGridFollow.Rows.Count - 1; x++)
{
    string getID = dataGridFollow.Rows[x].Cells["ID"].Value.ToString();
    int ID = int.Parse(getID);
    Thread t = new Thread(new ParameterizedThreadStart(UpdateLo));
    t.Start(ID);
    // <- Wait for thread to finish here before getting back in the for loop
}
I have googled a lot in the past 24 hours and read a lot about this specific issue and its implementations (Thread.Join, thread pools, queuing, and even SmartThreadPool).
It's likely that I've read the correct answer somewhere, but I'm not at ease enough with C# to decipher those threading tools.
Thanks for your time
To avoid freezing the UI, the framework provides a class expressly for these purposes: have a look at the BackgroundWorker class (it executes an operation on a separate thread). Here's some info: http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx
http://msdn.microsoft.com/en-us/magazine/cc300429.aspx
By the way, if I understand correctly, you don't want to parallelize any operation, just wait for the page-parsing method to complete. Basically, for each row of your grid (a foreach loop) you get the id and call the method. If you do want to go parallel, just reuse the same foreach loop and make it parallel:
http://msdn.microsoft.com/en-us/library/dd460720.aspx
What you want is to set off a few workers that do some task.
When one finishes, you can start a new one.
I'm sure there is a better way using thread pools or whatever, but I was bored, so I came up with this.
using System;
using System.Collections.Generic;
using System.Linq;
using System.ComponentModel;
using System.Threading;
namespace WorkerTest
{
class Program
{
static void Main(string[] args)
{
WorkerGroup workerGroup = new WorkerGroup();
Console.WriteLine("Starting...");
for (int i = 0; i < 100; i++)
{
var work = new Action(() =>
{
Thread.Sleep(1000); //somework
});
workerGroup.AddWork(work);
}
while (workerGroup.WorkCount > 0)
{
Console.WriteLine(workerGroup.WorkCount);
Thread.Sleep(1000);
}
Console.WriteLine("Fin");
Console.ReadLine();
}
}
public class WorkerGroup
{
private List<Worker> workers;
private Queue<Action> workToDo;
private object Lock = new object();
public int WorkCount { get { return workToDo.Count; } }
public WorkerGroup()
{
workers = new List<Worker>();
workers.Add(new Worker());
workers.Add(new Worker());
foreach (var w in workers)
{
w.WorkCompleted += (OnWorkCompleted);
}
workToDo = new Queue<Action>();
}
private void OnWorkCompleted(object sender, EventArgs e)
{
FindWork();
}
public void AddWork(Action work)
{
workToDo.Enqueue(work);
FindWork();
}
private void FindWork()
{
lock (Lock)
{
if (workToDo.Count > 0)
{
var availableWorker = workers.FirstOrDefault(x => !x.IsBusy);
if (availableWorker != null)
{
var work = workToDo.Dequeue();
availableWorker.StartWork(work);
}
}
}
}
}
public class Worker
{
private BackgroundWorker worker;
private Action work;
public bool IsBusy { get { return worker.IsBusy; } }
public event EventHandler WorkCompleted;
public Worker()
{
worker = new BackgroundWorker();
worker.DoWork += new DoWorkEventHandler(OnWorkerDoWork);
worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(OnWorkerRunWorkerCompleted);
}
private void OnWorkerRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
if (WorkCompleted != null)
{
WorkCompleted(this, EventArgs.Empty);
}
}
public void StartWork(Action work)
{
if (!IsBusy)
{
this.work = work;
worker.RunWorkerAsync();
}
else
{
throw new InvalidOperationException("Worker is busy");
}
}
private void OnWorkerDoWork(object sender, DoWorkEventArgs e)
{
work.Invoke();
work = null;
}
}
}
This is just a starting point.
You could start it off with a list of Actions and then raise a completed event when that group of actions is finished.
Then at least you can use a ManualResetEvent to wait for the completed event, or whatever logic you want, really.
Call a method directly, or do a while loop (with sleep calls) to check the status of the thread.
There are also async events, but they would call another method, and you want to continue from the same point.
I have no idea why the requests would timeout. That sounds like a different issue. However, I can make a few suggestions regarding your current approach.
Avoid creating threads in loops with nondeterministic bounds. There is a lot of overhead in creating threads. If the number of operations is not known before hand then use the ThreadPool or the Task Parallel Library instead.
You are not going to get the behavior you want by blocking the UI thread with Thread.Join. That causes the UI to become unresponsive, and it effectively serializes the operations, cancelling out any advantage you were hoping to gain with threads.
If you really want to limit the number of concurrent operations, then a better solution is to create a separate dedicated thread for kicking off the operations. This thread spins around a loop indefinitely, waiting for items to appear in a queue; when they do, it dequeues them and uses that information to kick off an operation asynchronously (again using the ThreadPool or TPL). The dequeueing thread can contain the logic for limiting the number of concurrent operations. Search for information on the producer-consumer pattern to get a better understanding of how you can implement this.
There is a bit of a learning curve, but who said threading was easy right?
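A sketch of limiting concurrency with a semaphore, shown in Java (illustrative; a C# equivalent would use SemaphoreSlim or a bounded producer-consumer queue): at most `limit` simulated page-parses run at once, and `peak` records the highest concurrency actually observed.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// The semaphore is the gate: a task must acquire a permit before doing
// its work, so no more than 'limit' tasks are ever in flight together.
public class BoundedParse {
    static int peakConcurrency(int tasks, int limit) throws InterruptedException {
        Semaphore gate = new Semaphore(limit);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            new Thread(() -> {
                try {
                    gate.acquire();                      // blocks while 'limit' are active
                    peak.accumulateAndGet(running.incrementAndGet(), Math::max);
                    Thread.sleep(20);                    // the simulated page parse
                    running.decrementAndGet();
                    gate.release();
                } catch (InterruptedException ignored) { }
                done.countDown();
            }).start();
        }
        done.await();
        return peak.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("peak concurrency: " + peakConcurrency(20, 3));
    }
}
```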
If I understand correctly, what you're currently doing is looping through a list of IDs in the UI thread, starting a new thread to handle each one. The blocking issue you're seeing could well be that creating that many unique threads takes too many resources. So personally (without knowing more) I would redesign the process like so:
// Somewhere in the UI thread
Thread worker = new Thread(new ParameterizedThreadStart(UpdateLoWorker));
worker.Start(dataGridFollow.Rows);

// Worker thread
private void UpdateLoWorker(object state)
{
    var rows = (DataGridViewRowCollection)state;
    foreach (DataGridViewRow r in rows)
    {
        int ID = int.Parse(r.Cells["ID"].Value.ToString());
        UpdateLo(ID);
    }
}
Here you'd have a single non-blocking worker which sequentially handles each ID.
Consider using the Async CTP. It's an async pattern Microsoft recently released for download, and it should simplify async programming tremendously. The link is http://msdn.microsoft.com/en-us/vstudio/async.aspx. (Read the whitepaper first.)
Your code would look something like the following. (I've not verified my syntax yet, sorry).
private async Task DoTheWork()
{
    for (int x = 0; x <= dataGridFollow.Rows.Count - 1; x++)
    {
        string getID = dataGridFollow.Rows[x].Cells["ID"].Value.ToString();
        int ID = int.Parse(getID);
        Task t = new Task(new Action<object>(UpdateLo), ID);
        t.Start();
        await t;
    }
}
This method returns a Task that can be checked periodically for completion. This follows the pattern of "fire and forget" meaning you just call it and presumably, you don't care when it completes (as long as it does complete before 15 minutes).
EDIT
I corrected the syntax above, you would need to change UpdateLo to take an object instead of an Int.
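The Java analogue of that sequential-but-non-blocking pattern is a CompletableFuture chain (illustrative sketch; updateLo below is a hypothetical stand-in for the asker's UpdateLo):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Each step is chained onto the previous future, so the IDs are processed
// one at a time on a background thread while the caller stays free.
public class SequentialAsync {
    static CompletableFuture<Integer> processAll(List<Integer> ids) {
        CompletableFuture<Integer> chain = CompletableFuture.completedFuture(0);
        for (int id : ids) {
            chain = chain.thenApplyAsync(count -> {
                updateLo(id);          // hypothetical per-ID work (page parse, etc.)
                return count + 1;
            });
        }
        return chain;                  // completes after the last ID is processed
    }

    private static void updateLo(int id) {
        // stand-in for the real work
    }

    public static void main(String[] args) {
        System.out.println("processed: " + processAll(List.of(1, 2, 3)).join());
    }
}
```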
For a simple background thread runner that runs one queued thread at a time, you can do something like this:
private static List<Thread> mThreads = new List<Thread>();

public static void Main()
{
    Thread t = new Thread(ThreadMonitor);
    t.IsBackground = true;
    t.Start();
}

private static void ThreadMonitor()
{
    while (true)
    {
        foreach (Thread t in mThreads.ToArray())
        {
            // Runs one thread in the queue and waits for it to finish
            t.Start();
            mThreads.Remove(t);
            t.Join();
        }
        Thread.Sleep(2000); // Wait before checking for new threads
    }
}

// Called from the UI or elsewhere to create any number of new threads to run
public static void DoStuff()
{
    Thread t = new Thread(DoStuffCore);
    t.IsBackground = true;
    mThreads.Add(t);
}

public static void DoStuffCore()
{
    // Your code here
}