I am using the BackgroundWorker to do some heavy stuff in the background so that the UI does not become unresponsive.
But today I noticed that when I run my program, only one of the two CPUs is being used.
Is there any way to use all CPUs with the BackgroundWorker?
Here is my simplified code, in case you are curious!
private System.ComponentModel.BackgroundWorker bwPatchApplier;
this.bwPatchApplier.WorkerReportsProgress = true;
this.bwPatchApplier.DoWork += new System.ComponentModel.DoWorkEventHandler(this.bwPatchApplier_DoWork);
this.bwPatchApplier.ProgressChanged += new System.ComponentModel.ProgressChangedEventHandler(this.bwPatchApplier_ProgressChanged);
this.bwPatchApplier.RunWorkerCompleted += new System.ComponentModel.RunWorkerCompletedEventHandler(this.bwPatchApplier_RunWorkerCompleted);
private void bwPatchApplier_DoWork(object sender, DoWorkEventArgs e)
{
string pc1WorkflowName;
string pc2WorkflowName;
if (!GetWorkflowSettings(out pc1WorkflowName, out pc2WorkflowName)) return;
int progressPercentage = 0;
var weWorkspaces = (List<WEWorkspace>) e.Argument;
foreach (WEWorkspace weWorkspace in weWorkspaces)
{
using (var spSite = new SPSite(weWorkspace.SiteId))
{
foreach (SPWeb web in spSite.AllWebs)
{
using (SPWeb spWeb = spSite.OpenWeb(web.ID))
{
PrintHeader(spWeb.ID, spWeb.Title, spWeb.Url, bwPatchApplier);
try
{
for (int index = 0; index < spWeb.Lists.Count; index++)
{
SPList spList = spWeb.Lists[index];
if (spList.Hidden) continue;
string listName = spList.Title;
if (listName.Equals("PC1") || listName.Equals("PC2"))
{
#region STEP 1
// STEP 1: Remove Workflow
#endregion
#region STEP 2
// STEP 2: Add Events: Adding & Updating
#endregion
}
if ((uint) spList.BaseTemplate == 10135 || (uint) spList.BaseTemplate == 10134)
{
#region STEP 3
// STEP 3: Configure Custom AssignedToEmail Property
#endregion
#region STEP 4
if (enableAssignToEmail)
{
// STEP 4: Install AssignedTo events to Work lists
}
#endregion
}
#region STEP 5
// STEP 5 Install Notification Events
#endregion
#region STEP 6
// STEP 6 Install Report List Events
#endregion
progressPercentage += TotalSteps;
UpdatePercentage(progressPercentage, bwPatchApplier);
}
}
catch (Exception exception)
{
progressPercentage += TotalSteps;
UpdatePercentage(progressPercentage, bwPatchApplier);
}
}
}
}
}
PrintMessage(string.Empty, bwPatchApplier);
PrintMessage("*** Process Completed", bwPatchApplier);
UpdateStatus("Process Completed", bwPatchApplier);
}
Thanks a lot for looking into this :)
The BackgroundWorker does its work within a single background (ThreadPool) thread. As such, if it's computationally heavy, it'll use one CPU heavily. The UI thread is still running on the second, but is probably (like most user interface work) spending almost all of its time idle waiting for input (which is a good thing).
If you want to split your work up to use more than one CPU, you'll need to use some other techniques. This could be multiple BackgroundWorker components, each doing some work, or using the ThreadPool directly. Parallel programming has been simplified in .NET 4 via the TPL, which is likely a very good option. For details, you can see my series on the TPL or MSDN's page on the Task Parallel Library.
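For example, here is a minimal sketch of splitting the per-workspace work across tasks. This assumes .NET 4; ProcessWorkspace is a hypothetical method holding the body of your current loop, and since SharePoint objects are generally not thread-safe, each task should open its own SPSite inside that method:

// requires System.Threading.Tasks (.NET 4)
var tasks = new List<Task>();
foreach (WEWorkspace weWorkspace in weWorkspaces)
{
    WEWorkspace workspace = weWorkspace; // capture a copy for the closure
    tasks.Add(Task.Factory.StartNew(() => ProcessWorkspace(workspace)));
}
Task.WaitAll(tasks.ToArray()); // blocks the DoWork thread, not the UI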
Each BackgroundWorker uses only a single thread to do the stuff you tell it to do. To take advantage of multiple cores, you would need multiple threads. That would mean either multiple BackgroundWorkers or spawning multiple threads from within your DoWork method.
The BackgroundWorker, by itself, only provides one additional thread of execution. Its purpose is to get things off the UI thread, and it's very good at that job. If you want more threads, you need to provide them yourself.
It would be tempting here to build a method that accepts an SPWeb argument, and just call Thread.Start() over and over for each object; then finish with Thread.Join() or WaitAll() to wait for them to finish at the end of the BackgroundWorker. However, this would be a bad idea because you'll lose efficiency as the operating system spends time performing context switches among all the threads.
Instead, you want to force your system to run in only a few threads, but at least two (in this case). A good rule of thumb is (2n - 1) threads, where "n" is the number of processor cores you have, though there are all kinds of cases where you'd want to break this rule. You can implement this by using the ThreadPool, by iterating over your SPWeb objects and adding them to a queue that you keep pulling from, or by other means such as the TPL.
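A minimal sketch of that queue-plus-workers idea, using the (2n - 1) rule; WorkItem, items, and ProcessItem are hypothetical stand-ins for your per-web data and logic:

int workerCount = 2 * Environment.ProcessorCount - 1;
Queue<WorkItem> queue = new Queue<WorkItem>(items); // your pending work
object sync = new object();
Thread[] workers = new Thread[workerCount];
for (int i = 0; i < workerCount; i++)
{
    workers[i] = new Thread(() =>
    {
        while (true)
        {
            WorkItem item;
            lock (sync)
            {
                if (queue.Count == 0) return; // queue drained: this worker exits
                item = queue.Dequeue();
            }
            ProcessItem(item); // the per-item work
        }
    });
    workers[i].Start();
}
foreach (Thread w in workers) w.Join(); // wait for all workers to finish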
The BackgroundWorker runs your work on a single additional thread, which keeps the UI responsive but will only keep one core busy at a time.
If you're using .NET 4, look into using the Task Parallel Library, which could give you better results and utilize both cores.
The BackgroundWorker itself is only creating a single thread apart from your main UI to do work in - it's not trying to parallelize the operations within that work thread. If you want to spread your work across multiple work threads you should look into using the TPL. Bear in mind that not all tasks translate well to parallel execution, so if freeing the UI is your only goal this may already be the best you can do.
There are potential pitfalls to this, but you might get some mileage out of utilizing Parallel.ForEach:
Instead of
foreach (SPWeb web in spSite.AllWebs)
{
//Your loop code here
}
You could:
Parallel.ForEach(spSite.AllWebs, web =>
{
//Your loop code here
});
This basically turns each item into a piece of work for the Task machinery in .NET 4.0 and schedules that work on the ThreadPool via the default TaskScheduler, which will give you some of the parallelism you need to take advantage of those cores.
You will have to fix the inevitable concurrency problems that might arise from this, but it's a good starting point. At a minimum, you are going to have to fix the fact that you are maintaining shared state across threads (the progress counter). Here's some guidance on that: http://msdn.microsoft.com/en-us/library/dd997392.aspx
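For example, a minimal sketch of making that counter safe, assuming progressPercentage becomes a field (Interlocked makes the read-modify-write atomic):

// inside the Parallel.ForEach body, instead of progressPercentage += TotalSteps:
int newValue = Interlocked.Add(ref progressPercentage, TotalSteps);
UpdatePercentage(newValue, bwPatchApplier);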
Related
This is my first time using threading in a C# application. Basically, it's an application which checks a list of websites to see whether each one is dead or alive.
Here is my first attempt at working with multi-threading:
public void StartThread(string URL,int no)
{
Thread newThread = new Thread(() =>
{
BeginInvoke(new Action(() => richTextBox1.Text += "Thread " + no + " Running" + Environment.NewLine));
bool b = ping(URL);
if (b == true) { BeginInvoke(new Action(() => richTextBox2.Text += "Okay" + Environment.NewLine)); }
else
{ return; }
});
newThread.Start();
}
I'm using the above function to create new threads and each thread is created inside a loop.
foreach (string site in website)
{
    StartThread(site, i);
    i++; // Counter
}
Since I'm a beginner, I have a few questions.
The code works fine, but I'm not sure if this is the best solution.
Sometimes the threads run perfectly, but sometimes they do not return any value from the ping() method, which checks the host and returns true if it's online using WebRequest. Is that usual?
If I ask the user to specify a number of threads to use, how can I distribute the work equally among those threads?
Is there an elegant way to track the status of each thread (dead/alive)? I currently use Process.GetCurrentProcess().Threads.Count;
Spinning up a new thread for each request is inefficient. You probably want a fixed number of worker threads instead (so one can run on each core of the CPU).
Have a look at the ConcurrentQueue<T> class, which gives you a thread-safe first-in-first-out queue: enqueue your requests and let each worker thread dequeue a request, process it, and so on until the queue is empty.
Be aware that you may not touch controls on your GUI from any thread other than the main GUI thread. Have a look at the ISynchronizeInvoke interface, which can help you decide whether a cross-thread situation needs to be handled by invoking back onto the GUI thread.
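A minimal sketch of that marshaling pattern, assuming the richTextBox2 control from the question's form (WinForms controls implement ISynchronizeInvoke):

void ReportResult(string message)
{
    if (richTextBox2.InvokeRequired) // true when called from a worker thread
    {
        richTextBox2.Invoke(new Action<string>(ReportResult), message);
    }
    else
    {
        richTextBox2.Text += message + Environment.NewLine;
    }
}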
1) Your solution is OK. The Thread class has been partially superseded by the Task class; if you're writing new code, you can use that. There is also something completely new in .NET 4.5 called await. However, see 4).
2) Your ping method might simply be crashing if the website is dead. Can you show us the code of that method?
4) The Thread class is nice because you can easily check the thread state, as per your requirements, using the ThreadState property. Just create a List<Thread>, put your threads in it, and then start them one by one.
3) If you want to take the number of threads from user input and distribute the work evenly, put the tasks in a queue (you can use the ConcurrentQueue that has already been suggested) and have the threads pull URLs from the queue. Sample code:
First you initialize everything; note that the queue and the thread list need to be fields so that the work() method and the start-up loop can see them:
ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
List<Thread> threads = new List<Thread>();

void initialize()
{
    foreach (string url in websites)
    {
        queue.Enqueue(url);
    }
    // and the threads
    for (int i = 0; i < threadCountFromTheUser; i++)
    {
        threads.Add(new Thread(work));
    }
}
// work method
void work()
{
    while (!queue.IsEmpty)
    {
        string url;
        bool fetchedUrl = queue.TryDequeue(out url);
        if (fetchedUrl)
            ping(url);
    }
}
and then run:
foreach (Thread t in threads)
{
    t.Start();
}
Code not tested
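For point 4, a minimal sketch of checking thread status, assuming the threads list from the sample above:

foreach (Thread t in threads)
{
    // IsAlive is true while the thread runs; ThreadState gives more detail
    Console.WriteLine("Thread {0}: {1} (alive: {2})",
        t.ManagedThreadId, t.ThreadState, t.IsAlive);
}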
You should consider the .Net ThreadPool. However, it's generally unsuitable for threads that take more than about a second to execute.
See also:
Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4
I have a function that I want to execute on a separate thread while preventing two threads from accessing the same resources. I also want to make sure that if a thread is currently executing, it stops and the new thread starts executing instead. This is what I have:
volatile int threadCount = 0; // use it to know the number of threads being executed
private void DoWork(string text, Action OncallbackDone)
{
threadCount++;
var t = new Thread(new ThreadStart(() =>
{
lock (_lock) // make sure that this code is only accessed by one thread
{
if (threadCount > 1) // if a new thread got in here return and let the last one execute
{
threadCount--;
return;
}
// do some work in here
Thread.Sleep(1000);
OncallbackDone();
threadCount--;
}
}));
t.Start();
}
If I fire that method 5 times, then all the threads will wait for the lock until it is released. I want to make sure that the last thread is the one that executes, though. When the threads are waiting to own the lock, how can I determine which one will own it next? I want them to own the resource in the order that I created the threads.
EDIT
I am not creating this application with .NET 4.0; sorry for not mentioning what I was trying to accomplish. I am creating an autocomplete control where I am filtering a lot of data. I don't want the main window to freeze every time I filter results, and I want to filter results as the user types. If the user types 5 letters at once, I want to stop all the threads and I will just be interested in the last one. Because the lock blocks all the threads, sometimes the last thread that I created may own the lock first.
I think you are overcomplicating this. If you are able to use 4.0, then just use the Task Parallel Library. With it, you can just set up a ContinueWith function so that threads that must happen in a certain order are done in the order you dictate. If this is NOT what you are looking for, then I actually would suggest that you not use threading, as this sounds like a synchronous action that you are trying to force into parallelism.
If you are just looking to cancel tasks, then here is a SO question on how to cancel TPL tasks. Why waste the resources if you are just going to dump all of them except the last one?
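A minimal sketch of that cancellation style, assuming .NET 4 (workItems and Process are hypothetical stand-ins for your filtering work):

var cts = new CancellationTokenSource();
Task task = Task.Factory.StartNew(() =>
{
    foreach (var item in workItems)
    {
        cts.Token.ThrowIfCancellationRequested(); // cooperative cancellation point
        Process(item);
    }
}, cts.Token);

// later, when newer input makes this work obsolete:
cts.Cancel();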
If you are not using 4.0, then you can accomplish the same thing with a BackgroundWorker; it just takes more boilerplate code :)
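A minimal sketch of that boilerplate, assuming a worker created with WorkerSupportsCancellation = true (WorkItem, workItems, and Process are hypothetical):

void Worker_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;
    foreach (WorkItem item in workItems)
    {
        if (worker.CancellationPending)
        {
            e.Cancel = true; // surfaces as RunWorkerCompletedEventArgs.Cancelled
            return;
        }
        Process(item);
    }
}
// elsewhere, to request cancellation: worker.CancelAsync();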
I agree with Justin in that you should use the .NET 4 Task Parallel Library. But if you want complete control you should not use the default Task Scheduler, which favors LIFO, but create your own Task Scheduler (http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx) and implement the logic that you want to determine which task gets preference.
Using Threads directly is not recommended unless you have deep knowledge of .NET Threading. If you are on .NET 4.0; Tasks and TPL are preferred.
This is what I came up with after reading the links that you guys posted. I guess I needed a queue, so I implemented:
volatile int threadCount = 0;
private void GetPredicateAsync(string text, Action<object> DoneCallback)
{
threadCount++;
ThreadPool.QueueUserWorkItem((x) =>
{
lock (_lock)
{
if (threadCount > 1) // disable executing threads at same time
{
threadCount--;
return; // if a new thread is created exit.
// let the newer task do work!
}
// do work in here
Application.Current.Dispatcher.BeginInvoke(new Action(() =>
{
threadCount--;
DoneCallback(Foo);
}));
}
},text);
}
I can have a maximum of 5 threads running simultaneously at any one time, each making use of one of 5 separate hardware units to speed up the computation of some complex calculations and return a result. The API (containing only one method) for each of these hardware units is not thread safe and can only run on a single thread at any point in time. Once a computation is completed, the same thread can be re-used to start another computation on either the same or a different hardware unit, depending on availability. Each computation is stand-alone and does not depend on the results of the others. Hence, the 5 threads may complete their execution in any order.
What is the most efficient C# (using .Net Framework 2.0) coding solution for keeping track of which hardware is free/available and assigning a thread to the appropriate hardware API for performing the computation? Note that other than the limitation of 5 concurrently running threads, I do not have any control over when or how the threads are fired.
Please correct me if I am wrong but a lock free solution is preferred as I believe it will result in increased efficiency and a more scalable solution.
Also note that this is not homework although it may sound like it...
.NET provides a thread pool that you can use. System.Threading.ThreadPool.QueueUserWorkItem() tells a thread in the pool to do some work for you.
Were I designing this, I'd not focus on mapping threads to your HW resources. Instead I'd expose a lockable object for each HW resource - this can simply be an array or queue of 5 Objects. Then for each bit of computation you have, call QueueUserWorkItem(). Inside the method you pass to QUWI, find the next available lockable object and lock it (aka, dequeue it). Use the HW resource, then re-enqueue the object, exit the QUWI method.
It won't matter how many times you call QUWI; there can be at most 5 locks held, each lock guards access to one instance of your special hardware device.
The doc page for Monitor.Enter() shows how to create a safe (blocking) Queue that can be accessed by multiple workers. In .NET 4.0, you would use the built-in BlockingCollection - it's the same thing.
That's basically what you want. Except don't create the threads yourself with new Thread(); use the thread pool.
cite: Advantage of using Thread.Start vs QueueUserWorkItem
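Here is a minimal sketch of what such a blocking queue might look like, built on Monitor.Wait/Pulse (an assumption; the exact class on the doc page may differ):

public class SafeQueue<T>
{
    readonly Queue<T> _queue = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (_queue)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_queue); // wake one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (_queue)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_queue); // releases the lock while waiting
            return _queue.Dequeue();
        }
    }
}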
// assume the SafeQueue class from the cited doc page.
SafeQueue<SpecialHardware> q = new SafeQueue<SpecialHardware>();
// set up the queue with objects protecting the 5 magic stones
private void Setup()
{
for (int i=0; i< 5; i++)
{
q.Enqueue(GetInstanceOfSpecialHardware(i));
}
}
// something like this gets called many times, by QueueUserWorkItem()
public void DoWork(WorkDescription d)
{
d.DoPrepWork();
// gain access to one of the special hardware devices
SpecialHardware shw = q.Dequeue();
try
{
shw.DoTheMagicThing();
}
finally
{
// ensure no matter what happens the HW device is released
q.Enqueue(shw);
// at this point another worker can use it.
}
d.DoFollowupWork();
}
A lock free solution is only beneficial if the computation time is very small.
I would create a facade for each hardware thread where jobs are enqueued and a callback is invoked each time a job finishes.
Something like:
public class Job
{
public string JobInfo {get;set;}
public Action<Job> Callback {get;set;}
}
public class MyHardwareService
{
Queue<Job> _jobs = new Queue<Job>();
Thread _hardwareThread;
ManualResetEvent _event = new ManualResetEvent(false);
public MyHardwareService()
{
    _hardwareThread = new Thread(WorkerFunc);
    _hardwareThread.IsBackground = true;
    _hardwareThread.Start(); // start the worker loop right away
}
public void Enqueue(Job job)
{
lock (_jobs)
_jobs.Enqueue(job);
_event.Set();
}
public void WorkerFunc()
{
    while (true)
    {
        _event.WaitOne();
        Job currentJob = null;
        lock (_jobs)
        {
            if (_jobs.Count > 0)
                currentJob = _jobs.Dequeue();
            else
                _event.Reset(); // nothing left; sleep until the next Enqueue
        }
        if (currentJob == null)
            continue;
        // invoke hardware here.
        // trigger the callback on a ThreadPool thread to be able
        // to continue with the next job ASAP
        ThreadPool.QueueUserWorkItem(state => currentJob.Callback(currentJob));
    }
}
}
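A hypothetical usage sketch, assuming one facade instance per hardware unit:

MyHardwareService[] services = new MyHardwareService[5];
for (int i = 0; i < services.Length; i++)
    services[i] = new MyHardwareService();

// hand a job to whichever facade you deem free or least loaded
services[0].Enqueue(new Job
{
    JobInfo = "compute-block-1",
    Callback = job => Console.WriteLine("Finished: " + job.JobInfo)
});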
Sounds like you need a thread pool with 5 threads where each one relinquishes the HW once it's done and adds it back to some queue. Would that work? If so, .Net makes thread pools very easy.
Sounds a lot like the Sleeping Barber problem. I believe the standard solution to that is to use semaphores.
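A minimal sketch of the semaphore approach, assuming .NET 2.0's System.Threading.Semaphore and some synchronized pool of the 5 hardware units:

Semaphore gate = new Semaphore(5, 5); // at most 5 concurrent holders

void Compute(object state)
{
    gate.WaitOne(); // blocks if all 5 hardware units are busy
    try
    {
        // dequeue a free hardware unit from a synchronized pool,
        // run the computation, then return the unit to the pool
    }
    finally
    {
        gate.Release();
    }
}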
I have a class that implements the Begin/End invocation pattern, where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop, but it does take a bit of time to process, so the work itself is not easily stopped.
I now have the side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, yet they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm targeting the 2.0 framework. I'm currently thinking that a Producer/Consumer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class, every time the caller uses Begin with a callback, I create a whole new thread on the thread pool. That call does a very small amount of processing and is not where I want to cancel; it's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool allows up to 250 threads per processor by default means that if you ask it to queue a large number of items with QueueUserWorkItem(), you can end up with a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100%, not only with the work itself but with the creation of the work, because of the way I queued the threads.
I was thinking that by using the Producer/Consumer pattern I could put the work into my own queue, which would let me moderate how many threads I create and avoid the CPU spike of creating all the concurrent threads. I might also be able to let the caller of my class flush all the jobs in the queue when they abandon the requests.
I am currently trying to implement this myself, but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
EDIT: My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone who reads this question and is using 4.0.
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed wrote an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread is running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean-up resources as needed, and report that the operation was canceled as part of the End arguments.
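A minimal sketch of that flag, workable on the 2.0 framework (WorkItem, pendingWork, and Process are hypothetical stand-ins for your queued jobs):

volatile bool _cancelRequested;

public void Cancel()
{
    _cancelRequested = true;
}

void Worker()
{
    foreach (WorkItem item in pendingWork)
    {
        if (_cancelRequested)
        {
            // clean up, then report the cancellation through the End callback
            return;
        }
        Process(item);
    }
}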
I've solved what I believe to be your exact problem by using a wrapper class around one or more BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
public class BackgroundWorkerReplaceable : IDisposable
{
BackgroundWorker activeWorker = null;
object activeWorkerSyncRoot = new object();
List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
DoWorkEventHandler doWork;
RunWorkerCompletedEventHandler runWorkerCompleted;
public bool IsBusy
{
get { return activeWorker != null && activeWorker.IsBusy; }
}
public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
{
this.doWork = doWork;
this.runWorkerCompleted = runWorkerCompleted;
ResetActiveWorker();
}
public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
{
try
{
lock(activeWorkerSyncRoot)
{
if(activeWorker.IsBusy)
{
ResetActiveWorker();
}
// This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
if(!activeWorker.IsBusy)
{
// Optionally handle ProgressChangedEventHandler and other features (under the lock!)
// Work on this new param
activeWorker.RunWorkerAsync(param);
}
else
{ // This should never happen since we create new workers when there's none available!
throw new LogicException(...); // assert or similar
}
}
}
catch(...) // InvalidOperationException and Exception
{ // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
}
}
public void Cancel()
{
ResetActiveWorker();
}
public void Dispose()
{ // You should implement a proper Dispose/Finalizer pattern
if(activeWorker != null)
{
activeWorker.CancelAsync();
}
foreach(BackgroundWorker worker in workerPool)
{
worker.CancelAsync();
worker.Dispose();
// perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
}
}
void ResetActiveWorker()
{
lock(activeWorkerSyncRoot)
{
if(activeWorker == null)
{
activeWorker = GetAvailableWorker();
}
else if(activeWorker.IsBusy)
{ // Current worker is busy - issue a cancel and set another active worker
activeWorker.CancelAsync(); // Make sure WorkerSupportsCancellation is set to true [Link9372]
// Optionally handle ProgressEventHandler -=
activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
}
//else - do nothing, activeWorker is already ready for work!
}
}
BackgroundWorker GetAvailableWorker()
{
// Loop through workerPool and return a worker if IsBusy is false
// if the loop exits without returning...
if(activeWorker != null)
{
workerPool.Add(activeWorker); // Save the old worker for possible future use
}
return GenerateNewWorker();
}
BackgroundWorker GenerateNewWorker()
{
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerSupportsCancellation = true; // [Link9372]
//worker.WorkerReportsProgress
worker.DoWork += doWork;
worker.RunWorkerCompleted += runWorkerCompleted;
// Other stuff
return worker;
}
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost of using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit. So, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database calls might not be immediately canceled when the threads are - so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the canceled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being canceled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have Cancel/Cancelled properties which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.
This kind of follows on from another question of mine.
Basically, once I have the code to access the file (will review the answers there in a minute) what would be the best way to test it?
I am thinking of creating a method which just spawns lots of BackgroundWorkers or something, tells them all to load/save the file, and tests with varying file/object sizes. Then, I'd get a response back from the threads to see whether each one failed/succeeded/made the world implode, etc.
Can you guys offer any suggestions on the best way to approach this? As I said before, this is all kinda new to me :)
Edit
Following ajmastrean's post:
I am using a console app to test with Debug.Asserts :)
Update
I originally went with BackgroundWorker to deal with the threading (since I'm used to it from Windows development), but I soon realised that for tests where multiple operations (threads) needed to complete before continuing, getting BackgroundWorker to do this was going to be a bit of a hack.
I then followed up on ajmastrean's post and realised I should really be using the Thread class for working with concurrent operations. I will now refactor using this method (albeit a different approach).
In .NET, there's no easy way to wait for ThreadPool threads to finish without setting up ManualResetEvents or AutoResetEvents, and I find those overkill for a quick test method (not to mention kind of complicated to create, set, and manage). BackgroundWorker is also a bit complex with the callbacks and such.
Something I have found that works is:
Create an array of threads.
Setup the ThreadStart method of each thread.
Start each thread.
Join on all threads (blocks the current thread until all other threads complete or abort)
public static void MultiThreadedTest()
{
Thread[] threads = new Thread[count];
for (int i = 0; i < threads.Length; i++)
{
threads[i] = new Thread(DoSomeWork);
}
foreach(Thread thread in threads)
{
thread.Start();
}
foreach(Thread thread in threads)
{
thread.Join();
}
}
@ajmastrean, since unit test results must be predictable, we need to synchronize the threads somehow. I can't see a simple way to do it without using events.
I found that ThreadPool.QueueUserWorkItem gives me an easy way to test such use cases
AutoResetEvent event1 = new AutoResetEvent(false);
AutoResetEvent event2 = new AutoResetEvent(false);

ThreadPool.QueueUserWorkItem(x =>
{
    using (File.Open(fileName, FileMode.Open))
    {
        event1.Set();     // Start 2nd thread
        event2.WaitOne(); // Hold the file open (blocking it)
    }
});
ThreadPool.QueueUserWorkItem(x =>
{
    try
    {
        event1.WaitOne();      // Wait until the 1st thread has opened the file
        File.Delete(fileName); // Simulate a conflict
    }
    catch (IOException)
    {
        Debug.Write("File access denied");
    }
});
Your idea should work fine. Basically you just want to spawn a bunch of threads, and make sure the ones writing the file take long enough to do it to actually make the readers wait. If all of your threads return without error, and without blocking forever, then the test succeeds.