How to use multiple threads of the same class in C#

I have a class with a bunch of methods in. For example
private int forFunction(String exceptionFileList, FileInfo z, String completedFileList, String sourceDir)
{
    atPDFNumber++;
    exceptionFileList = "";
    int blankImage = 1;
    int pagesMissing = 0;
    //delete the images currently in the folder
    deleteCreatedImages();
    //Get the amount of pages in the pdf
    int numberPDFPage = numberOfPagesPDF(z.FullName);
    //Convert the pdf to images on the user's pc
    convertToImage(z.FullName);
    //Check the images for blank pages
    blankImage = testPixels(@"C:\temp", z.FullName);
    //Check if the conversion couldn't convert a page because of an error
    pagesMissing = numberPDFPage - numberOfFiles;
    return pagesMissing;
}
Now what I'm trying to do is access that class from a thread... but not just one thread, maybe about 5 threads, to speed up processing, since one is a bit slow.
Now in my mind, that's going to be chaos... I mean, one thread changing variables while another thread is busy with them, and locking each and every variable in all of those methods... not going to have a good time...
So what I'm proposing, and I don't know if it's the right way, is this:
public void MyProc()
{
    if (this method is open, 4 other threads must wait)
    {
        mymethod(var, var);
    }
    if (this method is open, 4 other threads must wait and done with first method)
    {
        mymethod2();
    }
    if (this method is open, 4 other threads must wait and done with first and second method)
    {
        mymethod3();
    }
    if (this method is open, 4 other threads must wait and done with first and second and third method)
    {
        mymethod4();
    }
}
Would this be the right way to approach the problem of multiple threads accessing multiple methods at the same time?
Those threads will only access the class 5 times, and no more, since the workload will be equally divided.

Yes, that is one of your options. The conditional expression you have, however, should be replaced with a lock statement, or even better, make the method synchronized:
[MethodImpl(MethodImplOptions.Synchronized)]
private int forFunction(String exceptionFileList, FileInfo z, String completedFileList, String sourceDir)
It is not really a conditional, because there is nothing conditional going on here. The next thread to arrive must wait and then go on. It literally sleeps without executing any instructions and is then woken up from the outside.
Note also that when you are worried about variables messed up during the parallel execution of a non-synchronized method, this only applies to member variables (class fields). It does not apply to local variables declared inside the method as each thread would have its own copy of those.
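For reference, a minimal sketch of the lock-based equivalent (the _processLock field name is an assumption; the elided body is the one from the question):
private readonly object _processLock = new object();

private int forFunction(String exceptionFileList, FileInfo z, String completedFileList, String sourceDir)
{
    // Only one thread at a time can execute this block; the other threads
    // block at the lock and proceed one by one as it is released.
    lock (_processLock)
    {
        // ... existing body from the question goes here unchanged ...
        return 0; // placeholder so the sketch compiles
    }
}
Note that [MethodImpl(MethodImplOptions.Synchronized)] locks on the object instance itself (this), so an explicit private lock object like the one above is generally considered a bit safer, because no outside code can accidentally take the same lock.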

Related

C# shared variable access from different threads

I am using static variables to share values between threads, but it is taking a long time to get their values.
Context: I have a static class Results.cs, where I store the result variables of two running Process.cs instances.
public static int ResultsStation0 { get; set; }
public static int ResultsStation1 { get; set; }
Then, a function of the two process instances is called at the same time, with ResultsStation0/1 initially set to -1.
Because the results are not produced at the same time, the function checks that both results are available. The faster instance sets its result and waits for the result of the slower instance.
void StationResult()
{
    Stopwatch sw = new Stopwatch();
    sw.Restart();
    switch (stationIndex) //Set the result of the station thread
    {
        case 0: Results.ResultsStation0 = 1; break;
        case 1: Results.ResultsStation1 = 1; break;
    }
    //Waits to get the results of both threads
    while (true)
    {
        if (Results.ResultsStation0 != -1 && Results.ResultsStation1 != -1)
        {
            break;
        }
    }
    Trace_Info("GOT RESULTS " + stationIndex + "Time: " + sw.ElapsedMilliseconds.ToString() + "ms");
    if (Results.ResultsStation0 == 1 && Results.ResultsStation1 == 1)
    {
        //set OK if both results are OK
        Device.profinet.WritePorts(new Enum[] { NOK, OK },
            new int[] { 0, 1 });
    }
}
It works, but the problem is that the value of sw in the thread that waits should be around 1 ms. I get 1 ms sometimes, but most of the time I get values of up to 80 ms.
My question is: why does it take that long if they are sharing the same memory (I guess)?
Is this the right way to access a variable from multiple threads?
Don't use this method. Global mutable state is bad enough; mixing in multiple threads sounds like a recipe for unmaintainable code. Since there is no synchronization at all in sight, there is no real guarantee that your program will ever finish. On a single-CPU system your loop will prevent any real work from actually being done until the scheduler picks one of the worker threads to run, and even on a multi-core system you will waste a ton of CPU cycles.
If you really want global variables, these should be something that can signal the completion of the operation, i.e. a Task, or ManualResetEvent. That way you can get rid of your horrible spin-wait, and actually wait for each task to complete.
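For example, a minimal sketch of that signaling approach with ManualResetEventSlim (the Results class mirrors the one in the question; the event names are assumptions):
using System.Threading;

public static class Results
{
    // One event per station replaces the -1 sentinel values.
    public static readonly ManualResetEventSlim Station0Done = new ManualResetEventSlim(false);
    public static readonly ManualResetEventSlim Station1Done = new ManualResetEventSlim(false);

    public static int ResultsStation0 { get; set; }
    public static int ResultsStation1 { get; set; }
}

// In each station thread: store the result, then signal completion.
//     Results.ResultsStation0 = 1;
//     Results.Station0Done.Set();

// In the code that needs both results: block until both events are signaled,
// instead of spinning in a while (true) loop.
//     Results.Station0Done.Wait();
//     Results.Station1Done.Wait();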
But I would highly recommend getting rid of the global variables and just using standard task-based programming:
var result1 = Task.Run(MyMethod1);
var result2 = Task.Run(MyMethod2);
await Task.WhenAll(new []{result1, result2});
Such code is much easier to reason about and understand.
Multi threaded programming is difficult. There are a bunch of new ways your program can break, and the compiler will not help you. You are lucky if you even get an exception, in many cases you will just get an incorrect result. If you are unlucky you will only get incorrect results in production, not in development or testing. So you should read a fair amount about the topic so that you are at least familiar with the common dangers and the ways to mitigate them.
You are using flags for signaling; for exactly this purpose there is a class called AutoResetEvent.
There's a difference between safe access and synchronization.
For safe (atomic) access you can use the Interlocked class.
For synchronization you use mutex-based solutions - spinlocks, barriers, etc.
What it looks like is that you need a synchronization mechanism, because you rely on an atomic behavior to signal a process that it is done.
Furthermore, for C# there is the async way to do things, and that is to call await.
It is Task-based, so if you can redesign your flow to use Tasks instead of Threads, it will suit you better.
Just to be clear - atomicity means the operation is performed in one indivisible go.
So for example this is not atomic
int a = 0;
int b = a; //not atomic - read 'a' and then assign to 'b'.
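By contrast, a small illustration of the Interlocked class mentioned above, which turns a compound read-modify-write into a single atomic operation (a sketch; the Counter class is hypothetical):
using System.Threading;

class Counter
{
    private int _a; // shared between threads

    public void UnsafeIncrement()
    {
        _a = _a + 1;                   // not atomic: read, add and write back are separate steps
    }

    public void SafeIncrement()
    {
        Interlocked.Increment(ref _a); // atomic: no other thread can interleave
    }

    public int Read()
    {
        return Volatile.Read(ref _a);  // guarantees the freshest value is observed
    }
}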
I won't teach you everything there is to know about threading in C# in one answer - so my advice is to read the MSDN articles about threading and tasks.

Multithreaded program logic

My professor gave me this semi-pseudo code. He said I should find a mistake somewhere in this code's logic. At the moment I cannot find anything that could be wrong. Could you please give me some hints about what might be wrong? I'm not asking for the answer, because I would like to find it myself, but some hints about which direction I should look in would be awesome.
class Program
{
    int progressValue = 0;
    int totalFiles = 0;
    int i = 0;
    bool toContinue = true;

    void MasterThread()
    {
        Thread thread1 = new Thread(Worker1);
        Thread thread2 = new Thread(Worker2);
        Thread progressThread = new Thread(ProgressThread);
        thread1.Start();
        thread2.Start();
        progressThread.Start();
    }

    void Worker1()
    {
        string[] files = Directory.GetFiles(@"C:\test1");
        totalFiles += files.Length;
        foreach (string file in files)
        {
            Encryption.Encrypt(file);
            i++;
            progressValue = 100 * i / totalFiles;
        }
        toContinue = false;
    }

    void Worker2()
    {
        string[] files = Directory.GetFiles(@"C:\test2");
        totalFiles += files.Length;
        foreach (string file in files)
        {
            Encryption.Encrypt(file);
            i++;
            progressValue = 100 * i / totalFiles;
        }
        toContinue = false;
    }

    void ProgressThread()
    {
        while (toContinue == true)
        {
            Update(progressValue);
            Thread.Sleep(500);
        }
    }
}
toContinue = false;
This is set at the end of whichever worker thread completes first - it will cause ProgressThread to stop as soon as the first thread completes, not when both threads complete. There should be two separate thread-completion flags, and both should be checked.
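A minimal sketch of that fix, assuming the Program class from the question (the flag names are made up; volatile is added so the progress thread is guaranteed to see the updates):
volatile bool worker1Done = false;
volatile bool worker2Done = false;

void Worker1()
{
    // ... encrypt files as before ...
    worker1Done = true;
}

void Worker2()
{
    // ... encrypt files as before ...
    worker2Done = true;
}

void ProgressThread()
{
    while (!(worker1Done && worker2Done))
    {
        Update(progressValue);
        Thread.Sleep(500);
    }
    Update(progressValue); // one final update so 100% gets reported
}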
To add to the good answers already provided, I'm being a little thorough, but the idea is to learn.
Exception Handling
There could be issues with exception handling. Always check your program for places where there could be an unexpected result.
How will this code behave if the value of this variable is not what we are expecting?
What will happen if we divide by zero?
Things like that.
Have a look at where variables are initialized and ask yourself: is there a possibility that they might not be initialized in the way that's expected?
Exception Handling (C# Programming Guide)
Method Calls
Also check out any libraries being used in the code. e.g. Encryption.
Ask yourself, are these statements going to give me an expected result?
e.g.
string[] files = Directory.GetFiles(@"C:\test1");
Will this return an array of strings?
Is this how I should initialise an array of strings?
Question the calls:
e.g.
Update(progressValue);
What does this really do?
Class Library
Threading
How will it work, starting three threads like that?
Do they need to be coordinated?
Should threads sleep, to allow other actions to complete?
Also, think about accessing variables from different threads.
Is it going to be a mess trying to track the value of each variable?
Are they being overwritten?
Thread Class
How to: Create and Terminate Threads (C# Programming Guide)
Naming Conventions
On a smaller note, there are issues with naming conventions in C#. Implicit typing with var is generally preferred over explicit type declarations when the type is obvious from the right-hand side of the assignment.
C# Coding Conventions (C# Programming Guide)
I am not saying that there are issues with all these points, but if you investigate all of these and the points made in the other answers, you will find all the errors and you will obtain a better understanding of the code you are reading.
Here are the items:
There is nothing holding on to the "MasterThread" - so it's hard to tell if the program will instantly end or not.
Access is made to totalFiles from two threads and if both do so at the same time then it is possible that one or the other may win (or both may partially update the value) so there's no telling if you have a valid value or not. Interlocked.Add(ref totalFiles, files.Length); should be used instead.
Both worker threads also update i which could also become corrupted. Interlocked.Increment(ref i); should be used instead.
There is no telling if Encryption.Encrypt is thread-safe. Possibly a lock should be used.
The loop in ProgressThread is bad - Thread.Sleep should always be avoided - it is better to have an explicit update call (or other mechanism) to update progress.
There is no telling if Update(progressValue); is thread-safe. Possibly a lock should be used.
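For the two counter items above, a minimal sketch of what a corrected worker might look like (Encryption.Encrypt is assumed to be thread-safe here, otherwise it should be wrapped in its own lock):
void Worker1()
{
    string[] files = Directory.GetFiles(@"C:\test1");
    Interlocked.Add(ref totalFiles, files.Length);  // atomic update of the shared total

    foreach (string file in files)
    {
        Encryption.Encrypt(file);                   // assumed thread-safe
        int done = Interlocked.Increment(ref i);    // atomic increment; returns the new value
        // Use the returned value rather than re-reading i, which another thread may
        // have changed in the meantime. The progress calculation is still only
        // approximate while both workers are running.
        progressValue = 100 * done / totalFiles;
    }
}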
There are a few; I'll just enumerate the two obvious ones, since I'm assuming this is not an exercise in writing precise and correct multithreaded code.
You should ask yourself the following questions:
progressValue should measure progress from zero to one hundred of the total work (a progress value equal to 150 seems a little off, doesn't it?). Is it really doing that?
You should not stop updating progressValue (Update(progressValue)) until all the work is done. Are you really doing that?
I don't know too much about multithreading, but I'll try to give a couple of hints.
First, look at the global variables: what happens when you access the same variable from different threads?
Besides the hints in the other answers, I can't find anything else "wrong".

A value from one thread influencing the path of another thread

I have a program which uses two client threads and a server. There is a point in my program where I want a value in one thread to influence the path of another thread.
More specifically, I have this code in the server:
class Handler
{
    public void clientInteraction(Socket connection, bool isFirstThread, Barrier barrier)
    {
        string pAnswer = string.Empty;
        string endGameTrigger = string.Empty;
        //setup streamReaders and streamWriters
        while(true) //infinite game loop
        {
            //read in a question and send to both threads.
            pAnswer = sr.ReadLine();
            Console.WriteLine(pAnswer);
            awardPoints();
            writeToConsole("Press ENTER to ask another question or enter 0 to end the game", isFirstThread);
            if(isFirstThread == true)
            {
                endGameTrigger = Console.ReadLine(); //this is only assigning to one thread...
            }
            barrier.SignalAndWait();
            if(endGameTrigger == "0")//...meaning this is never satisfied in one thread
            {
                endGame();
            }
        }
    }
}
The boolean value isFirstThread is a value set up in the constructor of the thread to which I can detect which thread was connected first.
Is there some way, or perhaps a threading technique, that will allow the second connected thread to detect that endGameTrigger has been set in the first thread, so that both threads execute the endGame() method properly?
It's really only worth getting into multithreading in two situations:
If it's absolutely necessary to start a separate thread for performance/UI reasons.
If your code may be running in a multithreaded environment (like a web site) and you need to know that it won't break when multiple threads operate on the same class or the same values.
But exercise extreme caution. Incorrect use/handling of multiple threads can cause your code to behave unpredictably and inconsistently. Something will work most of the time and then not work for no apparent reason. Bugs will be difficult to reproduce and identify.
That being said, one of the essential concepts of handling multithreading is to ensure that two threads don't try to update the same value at the same time. They can corrupt or partially modify values in ways that would be impossible for a single thread.
One way to accomplish this is with locking.
private object _lockObject = new Object();
private string _myString;

void SetStringValue(string newValue)
{
    lock(_lockObject)
    {
        _myString = newValue;
    }
}
You generally have an object that exists only to serve as a lock. When one thread enters that lock block it acquires a lock on the object. If another thread already has a lock on that object then the next thread just waits for the previous thread to release the lock. That ensures that two threads can't update the value at the same time.
You want to keep the amount of code inside the lock as small as possible so that the lock is released as soon as possible. And be aware that if it gets complicated with multiple locks then two threads can permanently block each other.
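As a classic illustration of that last point, here is a small hypothetical program in which two threads take the same two locks in opposite order and block each other forever:
using System;
using System.Threading;

class DeadlockDemo
{
    private static readonly object _lockA = new object();
    private static readonly object _lockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (_lockA)
            {
                Thread.Sleep(100);   // give the other thread time to take _lockB
                lock (_lockB) { Console.WriteLine("Thread 1 got both locks"); }
            }
        });
        var t2 = new Thread(() =>
        {
            lock (_lockB)
            {
                Thread.Sleep(100);   // give the other thread time to take _lockA
                lock (_lockA) { Console.WriteLine("Thread 2 got both locks"); }
            }
        });
        t1.Start(); t2.Start();
        t1.Join();  t2.Join();       // never returns: each thread waits for the other's lock
    }
}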
For incrementing and updating numbers there are also Interlocked operations that handle this for you, ensuring that those operations are executed atomically, one thread at a time.
Just for fun I wrote this console app. It takes a sentence, breaks it into words, and then adds each word back onto a new string using multiple threads and outputs the string.
using System;
using System.Threading.Tasks;

namespace FunWithThreading
{
    class Program
    {
        static void Main(string[] args)
        {
            var sentence =
                "I am going to add each of these words to a string "
                + "using multiple threads just to see what happens.";
            var words = sentence.Split(' ');
            var output = "";
            Parallel.ForEach(words, word => output = output + " " + word);
            Console.WriteLine(output);
            Console.ReadLine();
        }
    }
}
The first two times I ran it, the output string was exactly what I started with. Great, it works perfectly! Then I got this:
I am going to add of these words to a string using multiple threads just to see what happens. each
Then I ran it 20 more times and couldn't repeat the error. Just imagine the frustration if this was a real application and something unpredictable like this happened even though I tested over and over and over, and then I couldn't get it to happen again.
So the point isn't that multithreading is evil, but just to understand the risks, only introduce it if you need to, and then carefully consider how to prevent threads from interfering with each other.
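For completeness, a sketch of the same toy program with a lock around the append. No words can be lost any more, although Parallel.ForEach still processes the words in no particular order, so the sentence can still come out shuffled:
using System;
using System.Threading.Tasks;

namespace FunWithThreading
{
    class Program
    {
        static void Main(string[] args)
        {
            var sentence =
                "I am going to add each of these words to a string "
                + "using multiple threads just to see what happens.";
            var words = sentence.Split(' ');
            var output = "";
            var sync = new object();

            // The lock ensures the read-append-write on 'output' is done by one
            // thread at a time, so no word can be lost.
            Parallel.ForEach(words, word =>
            {
                lock (sync)
                {
                    output = output + " " + word;
                }
            });

            Console.WriteLine(output);
            Console.ReadLine();
        }
    }
}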
In response to Luaan's comment: I have made endGameTrigger a private static string field in the Handler class. Making it a static field instead of a local method variable gives all instances of the Handler class (one per thread) access to its most recent assignment. Many thanks.

How to do proper Parallel.ForEach, locking and progress reporting

I'm trying to implement the Parallel.ForEach pattern and track progress, but I'm missing something regarding locking. The following example counts to 10000 when threadCount = 1, but not when threadCount > 1. What is the correct way to do this?
class Program
{
    static void Main()
    {
        var progress = new Progress();
        var ids = Enumerable.Range(1, 10000);
        var threadCount = 2;
        Parallel.ForEach(ids, new ParallelOptions { MaxDegreeOfParallelism = threadCount },
            id => { progress.CurrentCount++; });
        Console.WriteLine("Threads: {0}, Count: {1}", threadCount, progress.CurrentCount);
        Console.ReadKey();
    }
}

internal class Progress
{
    private Object _lock = new Object();
    private int _currentCount;

    public int CurrentCount
    {
        get
        {
            lock (_lock)
            {
                return _currentCount;
            }
        }
        set
        {
            lock (_lock)
            {
                _currentCount = value;
            }
        }
    }
}
The usual problem with calling something like count++ from multiple threads (which share the count variable) is that this sequence of events can happen:
Thread A reads the value of count.
Thread B reads the value of count.
Thread A increments its local copy.
Thread B increments its local copy.
Thread A writes the incremented value back to count.
Thread B writes the incremented value back to count.
This way, the value written by thread A is overwritten by thread B, so the value is actually incremented only once.
Your code adds locks around operations 1, 2 (get) and 5, 6 (set), but that does nothing to prevent the problematic sequence of events.
What you need to do is to lock the whole operation, so that while thread A is incrementing the value, thread B can't access it at all:
lock (progressLock)
{
    progress.CurrentCount++;
}
If you know that you will only need incrementing, you could create a method on Progress that encapsulates this.
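For example, the Progress class could be reshaped along these lines (a sketch; the Increment method name is an assumption):
internal class Progress
{
    private readonly object _lock = new object();
    private int _currentCount;

    public int CurrentCount
    {
        get { lock (_lock) { return _currentCount; } }
    }

    // The whole read-increment-write happens under one lock, so no update is lost.
    public void Increment()
    {
        lock (_lock)
        {
            _currentCount++;
        }
    }
}

// In the loop: progress.Increment(); instead of progress.CurrentCount++;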
Old question, but I think there is a better answer.
You can report progress using Interlocked.Increment(ref progress); that way you do not have to worry about locking the write operation to progress (note that this requires progress to be a field, not a property).
The easiest solution would actually have been to replace the property with a field, and then
lock (progressLock) { ++progress.CurrentCount; }
using a shared lock object such as the progressLock above.
(I personally prefer the look of the preincrement over the postincrement, as the "++." thing clashes in my mind! But the postincrement would of course work the same.)
This would have the additional benefit of decreasing overhead and contention, since updating a field is faster than calling a method that updates a field.
Of course, encapsulating it as a property can have advantages too. IMO, since field and property syntax is identical, the ONLY advantage of using a property over a field, when the property is autoimplemented or equivalent, is when you have a scenario where you may want to deploy one assembly without having to build and deploy dependent assemblies anew. Otherwise, you may as well use faster fields! If the need arises to check a value or add a side effect, you simply convert the field to a property and build again. Therefore, in many practical cases, there is no penalty to using a field.
However, we are living in a time where many development teams operate dogmatically, and use tools like StyleCop to enforce their dogmatism. Such tools, unlike coders, are not smart enough to judge when using a field is acceptable, so invariably the "rule that is simple enough for even StyleCop to check" becomes "encapsulate fields as properties", "don't use public fields" et cetera...
Remove lock statements from properties and modify Main body:
object sync = new object();

Parallel.ForEach(ids, new ParallelOptions { MaxDegreeOfParallelism = threadCount },
    id =>
    {
        lock (sync)
            progress.CurrentCount++;
    });
The issue here is that ++ is not atomic - one thread can read and increment the value between another thread reading the value and it storing the (now incorrect) incremented value. This is probably compounded by the fact there's a property wrapping your int.
e.g.
Thread 1     Thread 2
reads 5      .
.            reads 5
.            writes 6
writes 6!    .
The locks around the setter and getter don't help with this, as there's nothing to stop the lock blocks themselves being called out of order.
Ordinarily, I'd suggest using Interlocked.Increment, but you can't use this with a property.
Instead, you could expose _lock and have the lock block be around the progress.CurrentCount++; call.
It is better to buffer the results of any database or file-system operation in a local variable instead of locking around the operation itself; locking reduces performance.

How to easily make this counter property thread-safe?

I have a property definition in a class that holds only counters. It must be thread-safe, and it isn't, because the get and set are not in the same lock. How do I do that?
private int _DoneCounter;

public int DoneCounter
{
    get
    {
        return _DoneCounter;
    }
    set
    {
        lock (sync)
        {
            _DoneCounter = value;
        }
    }
}
If you're looking to implement the property in such a way that DoneCounter = DoneCounter + 1 is guaranteed not to be subject to race conditions, it can't be done in the property's implementation. That operation is not atomic; it is actually three distinct steps:
Retrieve the value of DoneCounter.
Add 1
Store the result in DoneCounter.
You have to guard against the possibility that a context switch could happen in between any of those steps. Locking inside the getter or setter won't help, because that lock's scope exists entirely within one of the steps (either 1 or 3). If you want to make sure all three steps happen together without being interrupted, then your synchronization has to cover all three steps. Which means it has to happen in a context that contains all three of them. That's probably going to end up being code that does not belong to whatever class contains the DoneCounter property.
It is the responsibility of the person using your object to take care of thread safety. In general, no class that has read/write fields or properties can be made "thread-safe" in this manner. However, if you can change the class's interface so that setters aren't necessary, then it is possible to make it more thread-safe. For example, if you know that DoneCounter only increments and decrements, then you could re-implement it like so:
private int _doneCounter;
public int DoneCounter { get { return _doneCounter; } }
public int IncrementDoneCounter() { return Interlocked.Increment(ref _doneCounter); }
public int DecrementDoneCounter() { return Interlocked.Decrement(ref _doneCounter); }
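A quick usage sketch, assuming the members above live in a class called Counters:
var counters = new Counters();

// 1000 parallel increments, no locks required.
Parallel.For(0, 1000, _ => counters.IncrementDoneCounter());

Console.WriteLine(counters.DoneCounter); // always 1000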
Using the Interlocked class provides atomic operations, which are inherently thread-safe, as in this LinqPad example:
void Main()
{
    var counters = new Counters();
    counters.DoneCounter += 34;
    var val = counters.DoneCounter;
    val.Dump(); // 34
}

public class Counters
{
    int doneCounter = 0;

    public int DoneCounter
    {
        get { return Interlocked.CompareExchange(ref doneCounter, 0, 0); }
        set { Interlocked.Exchange(ref doneCounter, value); }
    }
}
If you're expecting not just that some threads will occasionally write to the counter at the same time, but that lots of threads will keep doing so, then you want to have several counters, at least one cache-line apart from each other, and have different threads write to different counters, summing them when you need the tally.
This keeps most threads out of each other's way, which stops them from flushing each other's values out of the cores and slowing each other down. (You still need Interlocked unless you can guarantee each thread stays on its own counter.)
For the vast majority of cases, you just need to make sure the occasional bit of contention doesn't mess up the values, in which case Sean U's answer is better in every way (striped counters like this are slower for uncontested use).
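A minimal sketch of that striped-counter idea (the 64-byte padding is an assumption about the typical cache-line size; the class and member names are made up):
using System.Runtime.InteropServices;
using System.Threading;

public class StripedCounter
{
    [StructLayout(LayoutKind.Explicit, Size = 64)]
    private struct PaddedInt
    {
        [FieldOffset(0)] public int Value;
    }

    private readonly PaddedInt[] _slots;

    public StripedCounter(int stripes)
    {
        _slots = new PaddedInt[stripes];
    }

    public void Increment(int stripe)
    {
        // Interlocked is still needed unless each stripe has exactly one writer.
        Interlocked.Increment(ref _slots[stripe].Value);
    }

    public int Sum()
    {
        int total = 0;
        for (int i = 0; i < _slots.Length; i++)
            total += Volatile.Read(ref _slots[i].Value);
        return total;
    }
}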
What exactly are you trying to do with the counters? Locks don't really do much with integer properties, since reads and writes of integers are atomic with or without locking. The only benefit one can get from locks is the addition of memory barriers; one can achieve the same effect by using Threading.Thread.MemoryBarrier() before and after you read or write a shared variable.
I suspect your real problem is that you are trying to do something like "DoneCounter+=1", which--even with locking--would perform the following sequence of events:
Acquire lock
Get _DoneCounter
Release lock
Add one to value that was read
Acquire lock
Set _DoneCounter to computed value
Release lock
Not very helpful, since the value might change between the get and set. What would be needed would be a method that would perform the get, computation, and set without any intervening operations. There are three ways this can be accomplished:
Acquire and keep a lock during the whole operation
Use Threading.Interlocked.Increment to add a value to _Counter
Use a Threading.Interlocked.CompareExchange loop to update _Counter
Using any of these approaches, it's possible to compute a new value of _Counter based on the old value, in such a fashion that the value written is guaranteed to be based upon the value _Counter had at the time of the write.
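As an illustration of the third approach, a minimal CompareExchange loop (a sketch; the AddClamped operation is just an example of a computation that a plain Increment cannot express):
private int _counter;

public void AddClamped(int amount, int max)
{
    int oldValue, newValue;
    do
    {
        oldValue = Volatile.Read(ref _counter);
        newValue = Math.Min(oldValue + amount, max); // any computation based on the old value
    }
    // Store newValue only if _counter still holds oldValue; otherwise another
    // thread got in first, so re-read and retry.
    while (Interlocked.CompareExchange(ref _counter, newValue, oldValue) != oldValue);
}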
You could declare the _DoneCounter variable as volatile, which guarantees that every thread sees its latest written value (note that this does not make compound operations such as incrementing atomic). See this:
http://msdn.microsoft.com/en-us/library/x13ttww7%28v=vs.71%29.aspx
