Is there a way to enumerate through all background workers? Currently, I've written a small method that I add to as I create new workers. Only one worker should run at a time, so the check method is:
private bool CheckForWorkers() // returns true if any background workers are currently running
{
    bool ret = false;
    if (bgWorkerFoo.IsBusy || bgWorkerMeh.IsBusy || bgWorkerHmpf.IsBusy
        || bgWorkerWorkyWorky.IsBusy || bgWorkerOMGStahp.IsBusy)
    {
        ret = true;
    }
    return ret;
}
At the time a button is clicked that would start a worker, the click method does this:
if (CheckForWorkers())
{
    MessageBox.Show("File generation already in progress. Please wait.", "Message");
    return;
}
else
{
    bgWorkerFoo.RunWorkerAsync();
}
I'd like to clean up my CheckForWorkers() method so that I don't need to add to it every time a new worker is created for a different task, but I can't seem to find any way to run through each worker associated with the app. Maybe there isn't a way? Are all of the workers instantiated before being used?
Why not do something similar to this?
BackgroundWorker[] Workers => new[] { bgWorkerFoo, bgWorkerMeh, bgWorkerHmpf, bgWorkerWorkyWorky, bgWorkerOMGStahp };

private bool CheckForWorkers()
{
    return Workers.Any(w => w != null && w.IsBusy);
}
It's likely you'll need to refer to the collection of workers in the future as well, so it makes sense to put them into a collection anyway.
Or, for non-C#6 syntax, a bit uglier:
private BackgroundWorker[] Workers
{
    get { return new[] { bgWorkerFoo, bgWorkerMeh, bgWorkerHmpf, bgWorkerWorkyWorky, bgWorkerOMGStahp }; }
}

private bool CheckForWorkers()
{
    return Workers.Any(w => w != null && w.IsBusy);
}
To dynamically get all BackgroundWorker fields/properties in your class via reflection, you can do this:
// requires using System.Linq; and using System.Reflection;
private IEnumerable<BackgroundWorker> GetWorkers()
{
    var flags = BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public;
    var fields = GetType().GetFields(flags);
    var fieldValues = fields.Select(f => f.GetValue(this)).OfType<BackgroundWorker>();
    var properties = GetType().GetProperties(flags);
    var propertyValues = properties.Select(p => p.GetValue(this)).OfType<BackgroundWorker>();
    // OfType<T>() already filters out nulls and non-worker members
    return fieldValues.Concat(propertyValues);
}

private bool CheckForWorkers()
{
    return GetWorkers().Any(w => w.IsBusy);
}
Might be a good idea to cache GetWorkers(), but it depends on your use-case.
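For instance, here is a minimal caching sketch with Lazy<T>, assuming all of the workers already exist by the time the first check runs (e.g. they are created in the form's constructor by InitializeComponent; MyForm is an illustrative name):
private Lazy<List<BackgroundWorker>> _cachedWorkers;

public MyForm()
{
    InitializeComponent();
    // Reflect once, on first use, and reuse the list afterwards.
    _cachedWorkers = new Lazy<List<BackgroundWorker>>(() => GetWorkers().ToList());
}

private bool CheckForWorkers()
{
    return _cachedWorkers.Value.Any(w => w.IsBusy);
}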
I'm going to suggest an alternative approach that may be appealing. The Microsoft Reactive Framework introduced a lot of very useful functionality for dealing with concurrency and threads. Primarily the framework is used to deal with event sources in terms of IObservable<T>, but it also provides a lot of schedulers for dealing with processing on different threads.
One of these schedulers is called EventLoopScheduler, and it lets you create a scheduler that runs on a background thread and only allows one operation to occur at any one time. Any thread can schedule tasks, and tasks can be scheduled to run immediately, in the future, or even on a recurring basis.
The key point here is that you don't need to track multiple background workers: no matter how many operations you schedule, they will only ever run in series.
When using Windows Forms, there is a scheduler called ControlScheduler that lets you post operations to the UI thread.
Once you have these two set up they become very easy to use.
Try this code:
var form1 = new Form();
var label1 = new Label();
label1.AutoSize = true;
form1.Controls.Add(label1);
form1.Show();

var background = new EventLoopScheduler();
var ui = new ControlScheduler(form1);

var someValue = -1;
background.Schedule(() =>
{
    var z = 42 * someValue;
    var bgTid = System.Threading.Thread.CurrentThread.ManagedThreadId;
    ui.Schedule(() =>
    {
        var uiTid = System.Threading.Thread.CurrentThread.ManagedThreadId;
        label1.Text = $"{z} calc on {bgTid} updated on {uiTid}";
    });
});
When I run this code, the form shows the calculated value along with the two thread ids. Clearly the calculation is correct, and it can be seen that the thread ids are different.
You can even do more powerful things like this:
var booking =
    background.SchedulePeriodic(0, TimeSpan.FromSeconds(1.0), state =>
    {
        var copy = state;
        if (copy % 2 == 0)
        {
            ui.Schedule(() => label1.Text = copy.ToString());
        }
        else
        {
            background.Schedule(() => ui.Schedule(() => label1.Text = "odd"));
        }
        return ++state;
    });

form1.FormClosing += (s, e) => booking.Dispose();
This code creates a timer that runs every second on the background thread. It uses the state variable to keep track of the number of times it has run: when the count is even it updates the UI with the value of state; otherwise it schedules, on its own scheduler, code that in turn schedules a UI update setting label1.Text to "odd". It can get quite sophisticated, but everything is serialized and synchronized for you. Since this creates a timer, there is a mechanism to shut the timer down: calling booking.Dispose().
Of course, since this is using the Reactive Framework you could just use standard observables to do the above, like this:
var query =
    from n in Observable.Interval(TimeSpan.FromSeconds(1.0), background)
    select n % 2 == 0 ? n.ToString() : "odd";

var booking = query.ObserveOn(ui).Subscribe(x => label1.Text = x);
Notice that the same schedulers are used.
If you need to perform the same task for a (potentially large or unbounded) number of variables, this likely means that they are "homogeneous" in some manner and thus should probably be combined into some kind of "registry" container instead.
E.g. I used to do something like this:
var thread = new BgWorker();
pool.Add(thread);

<...>

foreach (var thread in pool)
{
    do_something();
}
The specific implementation of the registry can be anything, depending on what you need to do with your objects (e.g. a dictionary if you need to get a specific one in a different scope than the one it was created in).
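For BackgroundWorkers specifically, a minimal sketch of such a registry might look like this (the names are illustrative, not from the code above):
private readonly List<BackgroundWorker> _workerRegistry = new List<BackgroundWorker>();

private BackgroundWorker CreateWorker()
{
    var worker = new BackgroundWorker();
    _workerRegistry.Add(worker); // register every worker at creation time
    return worker;
}

private bool AnyWorkerBusy()
{
    return _workerRegistry.Any(w => w.IsBusy);
}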
I have a background thread with a long running task.
The background thread searches for files according to given filters.
The task might run for a very long time, so I do not want to wait for task completion before showing some results.
Additionally, I do not want to lock up my UI thread in order to check if there are new results from the background task.
I would rather notify my main thread: "Hey, there is a new search result added".
Kind of like Windows Explorer, which shows search results while the search is still ongoing:
foreach (FileInfo itemToFilter in unfilteredSearchResults)
{
    string extension = itemToFilter.Extension.ToLowerInvariant(); // includes the leading dot
    if (extension == ".wav" || extension == ".flac" || extension == ".mp3")
    {
        // read audio information such as tags etc. Might take some time per file
        WaveFileLib.WaveFile.WAVEFile audioItem = new WaveFileLib.WaveFile.WAVEFile(itemToFilter.FullName);
        // Compare audio information such as tags, title, license, date, length etc to filter
        if (Filter.AudioItemMatchesFilter(ref audioItem))
        {
            lock (threadLock)
            {
                // add search result to list of filtered audio files
                this.AudioItemList.Add(audioItem);
            }
            // notify main thread in order to refresh ui (if applicable)
            ParentView.AudioItemAdded();
        }
    }
}
The main thread can then generate a view from the newly added item.
Previously this was quite easy with BeginInvoke, but with .NET Core that possibility seems to be gone.
What are my alternatives/options to notify the main thread about updated search results?
As suggested by Panagiotis Kanavos:
public class AudioFileSearcher
{
    // task caller
    public AudioFileSearcher(string searchPath, bool includeSubFolders, Views.SoundListView.SoundList parentView)
    {
        // Progress<T> captures the current SynchronizationContext, so construct it
        // on the UI thread; the callback is then posted back to that context.
        this.progress1 = new Progress<int>(value =>
        {
            backgroundWorker1_ProgressChanged();
        });
        Task.Run(async () => await FindAudioFiles(progress1));
    }

    // reference to be updated by task and read by event handler
    private Progress<int> progress1 { get; set; }

    // do something on task progress
    void backgroundWorker1_ProgressChanged()
    {
        // Update UI on main thread or whatever
    }

    // Task
    public async Task FindAudioFiles(IProgress<int> progress)
    {
        foreach (AudioItem itemToProcess in allFoundaudioItems)
        {
            // do long processing in background task

            // a new element has been generated, update UI
            if (progress != null) progress.Report(0); // ~fire event
        }
    }
}
Works like a charm, items are being loaded.
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive and add using System.Reactive.Linq; - then you can do this:
IObservable<WaveFileLib.WaveFile.WAVEFile> query =
    from itemToFilter in unfilteredSearchResults.ToObservable()
    let extension = itemToFilter.Extension.ToLowerInvariant()
    where extension == ".wav" || extension == ".flac" || extension == ".mp3"
    from audioItem in Observable.Start(() => new WaveFileLib.WaveFile.WAVEFile(itemToFilter.FullName))
    from x in Observable.Start(() =>
    {
        var a = audioItem;
        var flag = Filter.AudioItemMatchesFilter(ref a);
        return new { flag, audioItem = a };
    })
    where x.flag
    select x.audioItem;

IDisposable subscription =
    query
        .ObserveOnDispatcher()
        .Subscribe(audioItem =>
        {
            /* Your code to update the UI here */
        });
It was unclear to me whether it is the creation of a new WAVEFile or the call to AudioItemMatchesFilter that takes the time, so I wrapped both in Observable.Start. You also had a ref call, and that makes the code a bit messy.
Nonetheless, this code handles all of the calls in the background and automatically moves the results as they come in to the dispatcher thread.
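One caveat, stated as an assumption since the UI framework isn't shown: ObserveOnDispatcher() targets a WPF dispatcher. If this is a WinForms app, the System.Reactive.Windows.Forms package provides an ObserveOn overload that marshals onto a control instead:
IDisposable subscription =
    query
        .ObserveOn(parentView) // post results onto the WinForms control's UI thread
        .Subscribe(audioItem =>
        {
            /* Your code to update the UI here */
        });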
Is there any possible way to get the status of threads from a Thread.Join, or can I break out of a Thread.Join after a specified period?
For example:
I have a loop with n jobs and 3 free cores for 3 parallel threads. After joining the 3 threads, I would like to check whether a thread has finished its job so that another job can be started in its place.
I want to keep all 3 cores working at all times, not wait for all threads to stop and then start another 3.
The simplest, and most likely best, solution is to use the threadpool. The threadpool automatically scales based on available processors and cores.
ThreadPool.QueueUserWorkItem(state => TaskOne());
ThreadPool.QueueUserWorkItem(state => TaskTwo());
ThreadPool.QueueUserWorkItem(state => TaskThree());
ThreadPool.QueueUserWorkItem(state => TaskFour());
If you need to do this the hard way, you could keep a queue of pending tasks and a list of currently running tasks, and use a timeout for the Join() call so that it returns false if the thread is not ready.
I can't think of any reason to prefer the complex to the simple solution, but there might be one, of course.
var MAX_RUNNING = 3;
var JOIN_TIMEOUT_MS = 50;

var waiting = new Queue<ThreadStart>();
var running = new List<Thread>();

waiting.Enqueue(new ThreadStart(TaskOne));
waiting.Enqueue(new ThreadStart(TaskTwo));
waiting.Enqueue(new ThreadStart(TaskThree));
waiting.Enqueue(new ThreadStart(TaskFour));

while (waiting.Any() || running.Any())
{
    while (running.Count < MAX_RUNNING && waiting.Any())
    {
        var next = new Thread(waiting.Dequeue());
        next.Start();
        running.Add(next);
    }

    for (var i = running.Count - 1; i >= 0; --i)
    {
        var t = running[i];
        if (t.ThreadState == System.Threading.ThreadState.Stopped)
        {
            running.RemoveAt(i);
            break;
        }
        if (t.Join(JOIN_TIMEOUT_MS))
        {
            running.RemoveAt(i);
            break;
        }
    }
}
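If you can use Tasks (.NET 4.5 or later for Task.Run), here is a sketch of the same "start a new job as soon as a slot frees up" idea with Task.WaitAny, which avoids polling Join entirely:
var jobs = new Queue<Action>(new Action[] { TaskOne, TaskTwo, TaskThree, TaskFour });
var inFlight = new List<Task>();

while (jobs.Count > 0 || inFlight.Count > 0)
{
    // keep up to three jobs running at all times
    while (inFlight.Count < 3 && jobs.Count > 0)
    {
        inFlight.Add(Task.Run(jobs.Dequeue()));
    }

    // block until any one job finishes, then free its slot
    int finished = Task.WaitAny(inFlight.ToArray());
    inFlight.RemoveAt(finished);
}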
I've been trying to implement a simple producer-consumer pattern using Rx and observable collections. I also need to be able to throttle the number of subscribers easily. I have seen lots of references to LimitedConcurrencyLevelTaskScheduler in parallel extensions but I don't seem to be able to get this to use multiple threads.
I think I'm doing something silly so I was hoping someone could explain what. In the unit test below, I expect multiple (2) threads to be used to consume the strings in the blocking collection. What am I doing wrong?
[TestClass]
public class LimitedConcurrencyLevelTaskSchedulerTests
{
    private ConcurrentBag<string> _testStrings = new ConcurrentBag<string>();
    private ConcurrentBag<int> _threadIds = new ConcurrentBag<int>();

    [TestMethod]
    public void WhenConsumingFromBlockingCollection_GivenLimitOfTwoThreads_TwoThreadsAreUsed()
    {
        // Setup the command queue for processing combinations
        var commandQueue = new BlockingCollection<string>();
        var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
        var scheduler = new TaskPoolScheduler(taskFactory);

        commandQueue.GetConsumingEnumerable()
            .ToObservable(scheduler)
            .Subscribe(Go, ex => { throw ex; });

        var iterationCount = 100;
        for (int i = 0; i < iterationCount; i++)
        {
            commandQueue.Add(string.Format("string {0}", i));
        }
        commandQueue.CompleteAdding();

        while (!commandQueue.IsCompleted)
        {
            Thread.Sleep(100);
        }

        Assert.AreEqual(iterationCount, _testStrings.Count);
        Assert.AreEqual(2, _threadIds.Distinct().Count());
    }

    private void Go(string testString)
    {
        _testStrings.Add(testString);
        _threadIds.Add(Thread.CurrentThread.ManagedThreadId);
    }
}
Everyone seems to go through the same learning curve with Rx. The thing to understand is that Rx doesn't do parallel processing unless you explicitly make a query that forces parallelism. Schedulers do not introduce parallelism.
Rx has a contract of behaviour that says zero or more values are produced in series (regardless of how many threads might be used), one after another, with no overlap, finally to be followed by an optional single error or a single complete message, and then nothing else.
This is often written as OnNext*(OnError|OnCompleted).
All that schedulers do is define the rule that determines which thread a new value is processed on, when the scheduler has no pending values it is already processing for the current observable.
Now take your code:
var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
var scheduler = new TaskPoolScheduler(taskFactory);
This says that the scheduler will run values for a subscription on one of two threads. But it doesn't mean that it will do this for every value produced. Remember, since values are produced in series, one after another, it is better to re-use an existing thread than to go to the high cost of creating a new thread. So what Rx does is re-use the existing thread if a new value is scheduled on the scheduler before the current value is finished being processed.
This is the key - it re-uses the thread if a new value is scheduled before the processing of existing values is complete.
So your code does this:
commandQueue.GetConsumingEnumerable()
.ToObservable(scheduler)
.Subscribe(Go, ex => { throw ex; });
This means the scheduler only creates a thread when the first value comes along. By the time that expensive thread-creation operation completes, the code that adds values to the commandQueue has finished queuing them all, so Rx can more efficiently use the single existing thread rather than create a costly second one.
To avoid this you need to construct the query to introduce parallelism.
Here's how:
public void WhenConsumingFromBlockingCollection_GivenLimitOfTwoThreads_TwoThreadsAreUsed()
{
    var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
    var scheduler = new TaskPoolScheduler(taskFactory);

    var iterationCount = 100;

    Observable
        .Range(0, iterationCount)
        .SelectMany(n => Observable.Start(() => n.ToString(), scheduler)
            .Do(x => Go(x)))
        .Wait();

    (iterationCount == _testStrings.Count).Dump();
    (2 == _threadIds.Distinct().Count()).Dump();
}
Now, I've used the Do(...)/.Wait() combo to give you the equivalent of a blocking .Subscribe(...) method.
This results in both of your asserts returning true.
I have found that by modifying the subscription as follows, I can add 5 subscribers but only two threads will process the contents of the collection, so this serves my purpose.
for (int i = 0; i < 5; i++)
    observable.Subscribe(Go, ex => { throw ex; });
I'd be interested to know if there is a better or more elegant way to achieve this!
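One possibly more elegant option, sketched here under the assumption that capping concurrency is the real goal: Rx's Merge operator accepts a maximum-concurrency argument, which limits the number of in-flight inner observables without any custom task scheduler:
var subscription = commandQueue.GetConsumingEnumerable()
    .ToObservable(TaskPoolScheduler.Default)
    .Select(s => Observable.Start(() => Go(s), TaskPoolScheduler.Default))
    .Merge(2) // at most two Go(...) calls execute concurrently
    .Subscribe();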
I have an event source fired very frequently by network I/O; due to the underlying design, the event is raised on a different thread each time. I have wrapped this event via Rx with Observable.FromEventPattern(...), and I am now using TakeWhile(predict) to filter some special event data.
I now have some concerns about its thread safety. TakeWhile(predict) works as a "hit and mute", but can that still be guaranteed in a concurrent situation? I guess the underlying implementation could be something like this (I can't read the actual source code since it's too complicated...):
public static IObservable<TSource> TakeWhile<TSource>(this IObservable<TSource> source, Func<TSource, bool> predict)
{
    ISubject<TSource> takeUntilObservable = new TempObservable<TSource>();
    IDisposable dps = null;
    // 0 while takeUntilObservable is still active; 1 once the predicate has failed,
    // the subscription is disposed, and OnCompleted has already been sent.
    int state = 0;
    dps = source.Subscribe(
        (s) =>
        {
            /* NOTE: the 'hit and mute' here is still not thread safe; one thread may enter
             * the 'else' branch and be inside CompareExchange while, at the same time,
             * another thread has already passed predict(...) and is calling OnNext(...).
             * So the CompareExchange is mainly there to avoid calling OnCompleted() and
             * Dispose() multiple times.
             */
            if (predict(s) && state == 0)
            {
                takeUntilObservable.OnNext(s);
            }
            else
            {
                // != 0 means already disposed and OnCompleted sent; avoids multiple calls
                // from parallel threads.
                if (0 == Interlocked.CompareExchange(ref state, 1, 0))
                {
                    try
                    {
                        takeUntilObservable.OnCompleted();
                    }
                    finally
                    {
                        dps.Dispose();
                    }
                }
            }
        },
        (ex) => { takeUntilObservable.OnError(ex); },
        () =>
        {
            try
            {
                takeUntilObservable.OnCompleted();
            }
            finally { dps.Dispose(); }
        });
    return takeUntilObservable;
}
That TempObservable is just a simple implementation of ISubject.
If my guess is reasonable, then it seems thread safety can't be guaranteed: some unexpected event data may still arrive at OnNext(...) because the "mute" is still in progress.
I then wrote a simple test to verify, but contrary to my expectation, the results are all positive:
public class MultipleTheadEventSource
{
    public event EventHandler OnSthNew;
    int cocurrentCount = 1000;

    public void Start()
    {
        for (int i = 0; i < this.cocurrentCount; i++)
        {
            int j = i;
            ThreadPool.QueueUserWorkItem((state) =>
            {
                var safe = this.OnSthNew;
                if (safe != null)
                    safe(j, null);
            });
        }
    }
}

[TestMethod()]
public void MultipleTheadEventSourceTest()
{
    int loopTimes = 10;
    int onCompletedCalledTimes = 0;

    for (int i = 0; i < loopTimes; i++)
    {
        MultipleTheadEventSource eventSim = new MultipleTheadEventSource();
        var host = Observable.FromEventPattern(eventSim, "OnSthNew");
        host.TakeWhile(p => { return int.Parse(p.Sender.ToString()) < 110; }).Subscribe((nxt) =>
        {
            // try to print the unexpected values, BUT I never saw it happen!!!
            if (int.Parse(nxt.Sender.ToString()) >= 110)
            {
                this.testContextInstance.WriteLine(nxt.Sender.ToString());
            }
        }, () => { Interlocked.Increment(ref onCompletedCalledTimes); });
        eventSim.Start();
    }

    // simply wait for everything to be done.
    Thread.Sleep(60000);
    this.testContextInstance.WriteLine("onCompletedCalledTimes: " + onCompletedCalledTimes);
}
Before I did the testing, some friends here suggested I try Synchronize<TSource> or ObserveOn to make it thread safe, so any ideas on my thoughts above, and on why the issue is not reproduced?
As per your other question, the answer still remains the same: In Rx you should assume that Observers are called in a serialized fashion.
To provide a better answer: originally the Rx team ensured that observable sequences were thread safe, but the performance penalty for well-behaved/designed applications was unnecessary, so a decision was taken to remove the thread safety and its performance cost. To opt back into thread safety you can apply the Synchronize() method, which serializes all OnNext/OnError/OnCompleted calls. This doesn't mean they will be called on the same thread, but you won't get your OnNext method called while another one is being processed.
The bad news: from memory, this happened in Rx 2.0, and you are specifically asking about Rx 1.0. (I am not sure Synchronize() even exists in 1.x?)
So if you are in Rx v1, then you have this blurry certainty of what is thread safe and what isn't. I am pretty sure the Subjects are safe, but I can't be sure about the factory methods like FromEventPattern.
My recommendation is: if you need to ensure thread safety, serialize your data pipeline. The easiest way to do this is to use a single-threaded IScheduler implementation, i.e. DispatcherScheduler or an EventLoopScheduler instance.
Some good news is that when I wrote the book on Rx it did target v1, so this section is very relevant for you http://introtorx.com/Content/v1.0.10621.0/15_SchedulingAndThreading.html
So if your query right now looked like this:
Observable.FromEventPattern(....)
    .TakeWhile(x => x > 5)
    .Subscribe(....);
To ensure that the pipeline is serialized you can create an EventLoopScheduler (at the cost of dedicating a thread to this):
var scheduler = new EventLoopScheduler();

Observable.FromEventPattern(....)
    .ObserveOn(scheduler)
    .TakeWhile(x => x > 5)
    .Subscribe(....);
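And if you are able to move to Rx 2.0 or later, here is a sketch of the Synchronize() option mentioned above, which serializes the notifications without dedicating a thread to them:
Observable.FromEventPattern(....)
    .Synchronize()
    .TakeWhile(x => x > 5)
    .Subscribe(....);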
I have an application I have already started working on, and it seems I need to rethink things a bit. The application is a WinForms application at the moment. I allow the user to input the number of threads they would like to have running, and to allocate the number of records to process per thread. What I have done is loop through the thread-count variable and create the threads accordingly. I am not performing any locking (and am not sure whether I need to) on the threads. I am new to threading and am running into possible issues with multiple cores. I need some advice as to how I can make this perform better.
Before a thread is created, some records are pulled from my database to be processed. That list object is sent to the thread and looped through. Once it reaches the end of the loop, the thread calls the data functions to pull some new records, replacing the old ones in the list. This keeps going on until there are no more records. Here is my code:
private void CreateThreads()
{
    _startTime = DateTime.Now;
    var totalThreads = 0;
    var totalRecords = 0;

    progressThreadsCreated.Maximum = _threadCount;
    progressThreadsCreated.Step = 1;
    LabelThreadsCreated.Text = "0 / " + _threadCount.ToString();
    this.Update();

    for (var i = 1; i <= _threadCount; i++)
    {
        LabelThreadsCreated.Text = i + " / " + _threadCount;
        progressThreadsCreated.Value = i;

        var adapter = new Dystopia.DataAdapter();
        var records = adapter.FindAllWithLocking(_recordsPerThread, _validationId, _validationDateTime);
        if (records != null && records.Count > 0)
        {
            totalThreads += 1;
            LabelTotalProcesses.Text = "Total Processes Created: " + totalThreads.ToString();

            var paramss = new ArrayList { i, records };
            var thread = new Thread(new ParameterizedThreadStart(ThreadWorker));
            thread.Start(paramss);
        }

        this.Update();
    }
}
private void ThreadWorker(object paramList)
{
    try
    {
        var parms = (ArrayList) paramList;
        var stopThread = false;
        var threadCount = (int) parms[0];
        var records = (List<Candidates>) parms[1];
        var runOnce = false;
        var adapter = new Dystopia.DataAdapter();
        var lastCount = records.Count;
        var runningCount = 0;

        while (_stopThreads == false)
        {
            if (!runOnce)
            {
                CreateProgressArea(threadCount, records.Count);
            }
            else
            {
                ResetProgressBarMethod(threadCount, records.Count);
            }
            runOnce = true;

            var counter = 0;
            if (records.Count > 0)
            {
                foreach (var record in records)
                {
                    counter += 1;
                    runningCount += 1;
                    _totalRecords += 1;

                    var rec = record;
                    var proc = new ProcRecords();
                    proc.Validate(ref rec);
                    adapter.Update(rec);
                    UpdateProgressBarMethod(threadCount, counter, records.Count, runningCount);
                    if (_stopThreads)
                    {
                        break;
                    }
                }

                UpdateProgressBarMethod(threadCount, -1, lastCount, runningCount);

                if (!_noRecordsInPool)
                {
                    records = adapter.FindAllWithLocking(_recordsPerThread, _validationId, _validationDateTime);
                    if (records == null || records.Count <= 0)
                    {
                        _noRecordsInPool = true;
                        break;
                    }
                    else
                    {
                        lastCount = records.Count;
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
Something simple you could do that would improve performance is to use a ThreadPool to manage your thread creation. This allows the OS to allocate a group of threads, paying the thread-creation penalty once instead of multiple times.
If you decide to move to .NET 4.0, Tasks would be another way to go.
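A minimal sketch of the Task-based version, where ProcessBatch is a hypothetical method standing in for the body of ThreadWorker:
var tasks = Enumerable.Range(0, threadCount)
    .Select(i => Task.Factory.StartNew(() => ProcessBatch(i))) // ProcessBatch is illustrative
    .ToArray();
Task.WaitAll(tasks); // or carry on and let the tasks complete in the background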
"I allow the user to input the number of threads they would like to have running. I also allow the user to allocate the number of records to process per thread."
This isn't something you really want to expose to the user. What are they supposed to put? How can they determine what's best? This is an implementation detail best left to you, or even better, the CLR or another library.
"I am not performing any locking (and not sure I need to or not) on the threads."
The majority of issues you'll have with multithreading will come from shared state. Specifically, in your ThreadWorker method, it looks like you refer to the following shared data: _stopThreads, _totalRecords, _noRecordsInPool, _recordsPerThread, _validationId, and _validationDateTime.
Just because these data are shared, however, doesn't mean you'll have issues. It all depends on who reads and writes them. For example, I think _recordsPerThread is only written once initially, and then read by all threads, which is fine. _totalRecords, however, is both read and written by each thread. You can run into threading issues here since _totalRecords += 1; consists of a non-atomic read-then-write. In other words, you could have two threads read the value of _totalRecords (say they both read the value 5), then increment their copy and then write it back. They'll both write back the value 6, which is now incorrect since it should be 7. This is a classic race condition. For this particular case, you could use Interlocked.Increment to atomically update the field.
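For this code, that fix is a one-line change:
// racy: the read-modify-write can lose updates under concurrency
_totalRecords += 1;

// atomic: safe under concurrent access
Interlocked.Increment(ref _totalRecords);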
In general, to do synchronization between threads in C#, you can use the classes in the System.Threading namespace, e.g. Mutex, Semaphore, and probably the most common, Monitor (equivalent to lock) which allows only one thread to execute a specific portion of code at a time. The mechanism you use to synchronize depends entirely on your performance requirements. For example, if you throw a lock around the body of your ThreadWorker, you'll destroy any performance gains you got through multithreading by effectively serializing the work. Safe, but slow :( On the other hand, if you use Interlocked.Increment and judiciously add other synchronization where necessary, you'll maintain your performance and your app will be correct :)
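As a sketch, the Monitor/lock pattern looks like this (MarkPoolEmpty is an illustrative name); keeping the locked region as small as possible preserves the parallelism:
private readonly object _sync = new object();

private void MarkPoolEmpty()
{
    lock (_sync) // only one thread at a time executes this block
    {
        _noRecordsInPool = true; // touch shared state only under the lock
    }
}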
Once you've gotten your worker method to be thread-safe, you should use some other mechanism to manage your threads. ThreadPool was mentioned, and you could also use the Task Parallel Library, which abstracts over the ThreadPool and smartly determines and scales how many threads to use. This way, you take the burden off of the user to determine what magic number of threads they should run.
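Here is a sketch of what the record loop could look like under the TPL (this assumes Dystopia.DataAdapter is safe to share between threads; if not, create one per iteration):
Parallel.ForEach(
    records,
    new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    record =>
    {
        var rec = record;
        new ProcRecords().Validate(ref rec);
        adapter.Update(rec); // assumes adapter tolerates concurrent Update calls
    });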
The obvious response is to question why you want threads in the first place. Where are the analysis and benchmarks showing that using threads will be an advantage?
How are you ensuring that non-GUI threads do not interact with the GUI? How are you ensuring that no two threads interact with the same variables or data structures in an unsafe way? Even if you realise you do need to use locking, how are you ensuring that the locks don't result in each thread processing its workload serially, removing any advantage that multiple threads might have provided?