using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ThreadExample
{
    public class Info
    {
        public int Counter;
        private static object _lock = new object();
        private List<Thread> ThreadList;

        public Info(int counter)
        {
            Counter = counter;
            ThreadList = new List<Thread>();
            ThreadList.Add(new Thread(ThreadBody));
            ThreadList.Add(new Thread(ThreadBody));
            ThreadList[0].Name = "t1";
            ThreadList[1].Name = "t2";
        }

        public void Start()
        {
            ThreadList.ForEach(t => t.Start(t.Name));
        }

        public void ThreadBody(object name)
        {
            while (Counter != 20)
            {
                lock (_lock)
                {
                    Counter++;
                    Console.WriteLine("Thread {0} : the value of the counter is {1}", name.ToString(), Counter);
                }
            }
        }
    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ThreadExample
{
    class Program
    {
        static void Main(string[] args)
        {
            Info info = new Info(0);
            info.Start();
        }
    }
}
If the lock covers just the counter increment, like this:

lock (_lock)
{
    Counter++;
}

I don't get an infinite loop, but with the lock written as in the example above, it runs into an infinite loop.
It could be that when Counter gets to 19 both threads enter the loop and it ends up getting incremented to 21 before they test the value again.
You'll need to hold the lock while reading the value of Counter. A double check of Counter might be adequate (re-read it inside the while loop while holding the lock). However, I'm not sure about this, because my head just can't keep track of all the details of the various threading memory models across native, .NET, Java, and whatever. Even on .NET, the ECMA model is apparently different from what MS guarantees for their CLR (see http://msdn.microsoft.com/en-us/magazine/cc163715.aspx and http://www.bluebytesoftware.com/blog/PermaLink,guid,543d89ad-8d57-4a51-b7c9-a821e3992bf6.aspx). For more details on why double-checking might or might not work, search for "double checked locking" - there's an awful lot of complexity behind something that should apparently be simple.
For example, here's a snippet of a run on my machine:
Thread t1 : the value of the counter is 1
Thread t2 : the value of the counter is 2
Thread t2 : the value of the counter is 3
Thread t2 : the value of the counter is 4
Thread t2 : the value of the counter is 5
Thread t2 : the value of the counter is 6
Thread t2 : the value of the counter is 7
Thread t2 : the value of the counter is 8
Thread t2 : the value of the counter is 9
Thread t2 : the value of the counter is 10
Thread t2 : the value of the counter is 11
Thread t2 : the value of the counter is 12
Thread t2 : the value of the counter is 13
Thread t2 : the value of the counter is 14
Thread t2 : the value of the counter is 15
Thread t2 : the value of the counter is 16
Thread t2 : the value of the counter is 17
Thread t2 : the value of the counter is 18
Thread t2 : the value of the counter is 19
Thread t2 : the value of the counter is 20
Thread t1 : the value of the counter is 21
Thread t1 : the value of the counter is 22
... Thread t1 never stops ...
You'll notice that t2 stops once it gets Counter to 20, but t1 doesn't notice that. It has already entered the loop (or decided to enter it) thinking that Counter is 1 (or maybe 2 or something else - just not 20).
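A minimal sketch of the double-check idea mentioned above - re-reading Counter while holding the lock before incrementing - applied to the question's ThreadBody (whether the unsynchronized outer read is safe is exactly the memory-model question noted earlier):

public void ThreadBody(object name)
{
    while (Counter != 20)          // first, unsynchronized check
    {
        lock (_lock)
        {
            if (Counter == 20)     // re-check under the lock
            {
                break;             // another thread already reached 20
            }
            Counter++;
            Console.WriteLine("Thread {0} : the value of the counter is {1}", name.ToString(), Counter);
        }
    }
}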
The problem is your line here:
while (Counter != 20)
Since you're only locking around the increment, not around the check, Counter can reach 19 at some point. Both threads can pass the check, then each increment Counter, making it 21 before either thread checks again.
That being said, even if the two threads don't hit that window at the same time, one thread may see the 20 and stop, while the other only sees 21 by the time it checks, and its loop will continue forever.
Your "fix" (locking just the increment) doesn't really fix it, by the way - it just makes the error case less likely. The reason is that the Console.WriteLine call is much, much slower, so more of the processing time happens inside your lock, making it more likely that the threads increment past your condition check before they see it again. The problem can still occur with only the counter increment locked, though it will be rarer.
You could easily correct this by having a more flexible condition, such as:
while (Counter < 20)
This will cause the threads to exit as soon as Counter reaches 20 or higher.
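Applied to the ThreadBody from the question, that change would look something like this (the condition still reads Counter outside the lock, which the other answers discuss):

public void ThreadBody(object name)
{
    while (Counter < 20)   // exits once Counter reaches 20 or more
    {
        lock (_lock)
        {
            Counter++;
            Console.WriteLine("Thread {0} : the value of the counter is {1}", name.ToString(), Counter);
        }
    }
}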
The way your code is written, it's possible for both threads to increment Counter before their respective while clause gets evaluated. In that case, Counter can go from 19 to 21 before the next while is hit.
Try refactoring your loop into something like:
while (true) {
    lock (_lock) {
        Counter++;
        Console.WriteLine("Thread {0} : the value of the counter is {1}",
            name.ToString(), Counter);
        if (Counter >= 20) {
            break;
        }
    }
}
The code builds successfully with no compilation errors, but the iterations do not run as expected. I stop the loop at iteration 200 so the loop will not proceed further, but the loop also fails to execute all the iterations with an index lower than 200.
I am not sure why. Is there an alternative to Stop that I can use to fix this code?
Why are iterations with a lower index not performed?
How can I fix this issue? I have googled around, but all in vain.
Please consider the following code.
static void Main(string[] args)
{
    var values = Enumerable.Range(0, 500).ToArray();
    ParallelLoopResult result = Parallel.For(0, values.Count(),
        (int i, ParallelLoopState loopState) =>
        {
            if (i == 200)
                loopState.Stop();
            WorkOnItem(values[i]);
        });
    Console.WriteLine(result);
}

static void WorkOnItem(object value)
{
    System.Console.WriteLine("Started working on: " + value);
    Thread.Sleep(100);
    System.Console.WriteLine("Finished working on: " + value);
}
Any help to solve this issue would be appreciated. Thanks
You should call loopState.Break() instead of loopState.Stop().
From the documentation of ParallelLoopState.Break method:
Break indicates that no iterations after the current iteration should be run. It effectively cancels any additional iterations of the loop. However, it does not stop any iterations that have already begun execution. For example, if Break is called from the 100th iteration of a parallel loop iterating from 0 to 1,000, all iterations less than 100 should still be run, but the iterations from 101 through to 1000 that have not yet started are not executed.
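Applied to the code in the question, the only change needed is the call itself; a sketch:

ParallelLoopResult result = Parallel.For(0, values.Count(),
    (int i, ParallelLoopState loopState) =>
    {
        if (i == 200)
            loopState.Break();   // iterations below 200 still run; later ones that haven't started are skipped
        WorkOnItem(values[i]);
    });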
I have the following tasks; they share the sum variable, and at the end the sum should be 9, but I get 3. Can you please help me fix it? Many thanks.
int sum = 0;
Task t1 = Task.Factory.StartNew(() =>
{
    sum = sum + Computation();
});
Task t2 = Task.Factory.StartNew(() =>
{
    sum = sum + Computation();
});
Task t3 = Task.Factory.StartNew(() =>
{
    sum = sum + Computation();
});
Task.WaitAll(t1, t2, t3);
Console.WriteLine($"The sum is {sum}");

private static int Computation()
{
    return 3;
}
It's because you're writing to the same field from multiple threads at the same time.
Use Interlocked.Add from System.Threading, which performs the read-add-write as a single atomic operation, so the threads can't overwrite each other's updates.
int sum = 0;
Task t1 = Task.Factory.StartNew(() =>
{
    Interlocked.Add(ref sum, Computation());
});
Task t2 = Task.Factory.StartNew(() =>
{
    Interlocked.Add(ref sum, Computation());
});
Task t3 = Task.Factory.StartNew(() =>
{
    Interlocked.Add(ref sum, Computation());
});
Task.WaitAll(t1, t2, t3);
Console.WriteLine($"The sum is {sum}");
You never tell your code to wait until task t1 is finished before you start t2, and so on, so everything executes in parallel. Each task reads the value in sum (initially 0) and adds 3, so 0 + 3 = 3, and then writes the 3 back. So the code does exactly what you programmed it to do.
Galister explained how you could add locks (one side note on that comment: operations in a computer almost never happen at exactly the same moment ;) )
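For completeness, a lock-based version of the same fix (the approach referenced above) might look like the sketch below; Interlocked.Add is the simpler choice for a plain addition, but lock generalizes to larger critical sections. It assumes the Computation method from the question plus the usual System.Linq and System.Threading.Tasks usings:

int sum = 0;
object sumLock = new object();               // guards every update of sum

Task[] tasks = Enumerable.Range(0, 3)
    .Select(_ => Task.Factory.StartNew(() =>
    {
        lock (sumLock)                       // only one task updates sum at a time
        {
            sum = sum + Computation();
        }
    }))
    .ToArray();

Task.WaitAll(tasks);
Console.WriteLine($"The sum is {sum}");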
The Interlocked class is great when atomic operations are required, but if you care about performance, consider combining it with thread-local storage (TLS).
Parallel.For has dedicated overloads for this; see the documentation.
Example:
int sum = 0;
Parallel.For(1, 3,
    () => 0, // The function that returns the initial state of the thread-local data.
    (x, state, tls) => // The delegate that is invoked once per iteration.
    {
        tls += x;
        return tls;
    },
    partial => // The delegate that performs a final action on the local state of each task.
    {
        Interlocked.Add(ref sum, partial);
    });
Do consider that for small loops it does not matter, and there will be no real difference between using thread-local storage and Interlocked. For big loops it will make a difference; using lock in big loops can cause serious overhead (blog):
This will potentially add a huge amount of overhead to our
calculation. Since we can potentially block while waiting on the lock
for every single iteration, we will most likely slow this down to
where it is actually quite a bit slower than our serial
implementation. The problem is the lock statement – any time you use
lock(object), you’re almost assuring reduced performance in a parallel
situation. When parallelizing a routine, try to avoid locks.
The idea is to reduce how often a lock on the sum variable has to be acquired; note that otherwise every task is trying to acquire that lock at every single point in time. With thread-local storage, the sum variable only needs to be locked once per thread instead of once per iteration.
instead of synchronizing once per element (potentially millions of
times), you’ll only have to synchronize once per thread
I'm trying to test the C# parallel methods and this is my test program:
class Program
{
    static int counter;

    static void Main(string[] args)
    {
        counter = 0;
        Parallel.Invoke(
            () => func(1),
            () => func(2),
            () => func(3)
        );
        Console.Read();
    }

    static void func(int num)
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine(string.Format("This is function #{0} loop. counter - {1}", num, counter));
            counter++;
        }
    }
}
What I tried to do is have one static shared variable, with each function call increasing it by 1.
I expected counter to be printed in order (1, 2, 3, ...).
But the output is surprising:
This is function #1 loop. counter - 0
This is function #1 loop. counter - 1
This is function #1 loop. counter - 2
This is function #1 loop. counter - 3
This is function #1 loop. counter - 4
This is function #3 loop. counter - 5
This is function #2 loop. counter - 1
This is function #3 loop. counter - 6
This is function #3 loop. counter - 8
This is function #3 loop. counter - 9
This is function #3 loop. counter - 10
This is function #2 loop. counter - 7
This is function #2 loop. counter - 12
This is function #2 loop. counter - 13
This is function #2 loop. counter - 14
Can anyone explain to me why this is happening?
The problem is that your code is not thread-safe. For example, what can happen is this:
function #2 gets the value of counter to use it in Console.WriteLine()
function #1 gets the value of counter, calls Console.WriteLine(), increments counter
function #1 gets the value of counter, calls Console.WriteLine(), increments counter
function #2 finally calls Console.WriteLine() with the old value
Also, ++ by itself is not thread-safe, so the final value may not be 15.
To fix both of these issues, you can use Interlocked.Increment():
for (int i = 0; i < 5; i++)
{
    int incrementedCounter = Interlocked.Increment(ref counter);
    Console.WriteLine("This is function #{0} loop. counter - {1}", num, incrementedCounter);
}
This way, you will get the number after increment, not before, as in your original code. Also, this code still won't print the numbers in the correct order, but you can be sure that each number will be printed exactly once.
If you do want to have the numbers in the correct order, you will need to use lock:
private static readonly object lockObject = new object();

…

for (int i = 0; i < 5; i++)
{
    lock (lockObject)
    {
        Console.WriteLine("This is function #{0} loop. counter - {1}", num, counter);
        counter++;
    }
}
Of course, if you do this, you won't actually get any parallelism, but I assume this is not your real code.
Actually, what happens is that Invoke just queues up these tasks and the runtime assigns threads to them, which adds a lot of randomness (which one gets picked up first, etc.).
Even the MSDN article states this:
This method can be used to execute a set of operations, potentially in parallel.
No guarantees are made about the order in which the operations execute or whether they execute in parallel. This method does not return until each of the provided operations has completed, regardless of whether completion occurs due to normal or exceptional termination.
The problem is that many threads are accessing the same variable; this is a concurrency issue.
You can try this:
static object syncObj = new object();

static void func(int num)
{
    for (int i = 0; i < 5; i++)
    {
        lock (syncObj)
        {
            Console.WriteLine(string.Format("This is function #{0} loop. counter - {1}", num, counter));
            counter++;
        }
    }
}
I have some code that loops through a list of records, starts an export task for each one, and increases a progress counter by 1 each time a task finishes so the user knows how far along the process is.
But depending on the timing of my loops, I often see the output showing a higher number before a lower number.
For example, I would expect to see output like this:
Exporting A
Exporting B
Exporting C
Exporting D
Exporting E
Finished 1 / 5
Finished 2 / 5
Finished 3 / 5
Finished 4 / 5
Finished 5 / 5
But instead I get output like this
Exporting A
Exporting B
Exporting C
Exporting D
Exporting E
Finished 1 / 5
Finished 2 / 5
Finished 5 / 5
Finished 4 / 5
Finished 3 / 5
I don't expect the output to be exact, since I'm not locking the value when I update/use it (sometimes it outputs the same number twice, or skips a number), but I wouldn't expect it to go backwards.
My test data set is 72 values, and the relevant code looks like this:
var tasks = new List<Task>();
int counter = 0;
StatusMessage = string.Format("Exporting 0 / {0}", count);

foreach (var value in myValues)
{
    var valueParam = value;

    // Create async task, start it, and store the task in a list
    // so we can wait for all tasks to finish at the end
    tasks.Add(
        Task.Factory.StartNew(() =>
        {
            Debug.WriteLine("Exporting " + valueParam);
            System.Threading.Thread.Sleep(500);
            counter++;
            StatusMessage = string.Format("Exporting {0} / {1}", counter, count);
            Debug.WriteLine("Finished " + counter.ToString());
        })
    );
}

// Begin async task to wait for all tasks to finish and update output
Task.Factory.StartNew(() =>
{
    Task.WaitAll(tasks.ToArray());
    StatusMessage = "Finished";
});
The output can appear backwards in both the debug statements and the StatusMessage output.
What's the correct way to keep count of how many async tasks in a loop are completed so that this problem doesn't occur?
You get mixed output because counter is not incremented in the same order in which the Debug.WriteLine(...) calls execute.
To get a consistent progress report, you can introduce a reporting lock into the task (a shared object, for example object progressReportLock = new object(), declared alongside counter):
tasks.Add(
    Task.Factory.StartNew(() =>
    {
        Debug.WriteLine("Exporting " + valueParam);
        System.Threading.Thread.Sleep(500);
        lock (progressReportLock)
        {
            counter++;
            StatusMessage = string.Format("Exporting {0} / {1}", counter, count);
            Debug.WriteLine("Finished " + counter.ToString());
        }
    })
);
In this sample the counter variable represents shared state among several threads. Using the ++ operator on shared state is simply unsafe and will give you incorrect results. It essentially boils down to the following instructions:
push counter to stack
push 1 to stack
add values on the stack
store into counter
Because multiple threads are executing this statement it's possible for one to interrupt the other partway through completing the above sequence. This would cause the incorrect value to end up in counter.
Instead of ++, use the following statement:
Interlocked.Increment(ref counter);
This operation is specifically designed to update state which may be shared among several threads. The increment happens atomically and won't suffer from the race condition I outlined.
The actual out-of-order display of values suffers from a similar problem even after my suggested fix. The increment and display operations aren't atomic, so one thread can interrupt another between the increment and the display. If you want the operations to be uninterruptible by other threads, then you will need to use a lock.
object lockTarget = new object();
int counter = 0;
...

lock (lockTarget) {
    counter++;
    StatusMessage = string.Format("Exporting {0} / {1}", counter, count);
    Debug.WriteLine("Finished " + counter.ToString());
}
Note that because the increment of counter now occurs inside the lock, there is no longer any need to use Interlocked.Increment.
My goal is the following:
There is a certain range of integers, and I have to test every integer in that range for something random. I want to use multiple threads for this, and divide the work equally among the threads using a shared counter. I set the counter at the beginning value, and let every thread take a number, increase it, do some calculations, and return a result. This shared counter has to be incremented with locks, because otherwise there will be gaps / overlaps in the range of integers to test.
I have no idea where to start. Let's say I want 12 threads to do the work; I do:

for (int t = 0; t < threads; t++)
{
    Thread thr = new Thread(new ThreadStart(startThread));
}
startThread() is the method I use for the calculations.
Can you help me on my way? I know I have to use the Interlocked class, but that's all….
Say you have an int field somewhere (initialized to -1), then:
int newVal = Interlocked.Increment(ref theField);
is a thread-safe increment; assuming you don't mind the (very small) risk of overflowing the upper int limit, then:
int next;
while ((next = Interlocked.Increment(ref theField)) <= upperInclusive) {
    // do item with index "next"
}
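Putting that together with the 12 threads from the question, a rough sketch might look like the following (theField, the bounds, and DoWork are placeholders for your own field, range, and per-integer calculation; it assumes the usual System.Collections.Generic and System.Threading usings):

static int theField;

static void Run(int lowerInclusive, int upperInclusive, int threadCount)
{
    // start just below the range so the first Increment yields lowerInclusive
    theField = lowerInclusive - 1;

    var threads = new List<Thread>();
    for (int t = 0; t < threadCount; t++)
    {
        Thread thr = new Thread(() =>
        {
            int next;
            // each thread atomically claims the next untested integer
            while ((next = Interlocked.Increment(ref theField)) <= upperInclusive)
            {
                DoWork(next);   // the per-integer test/calculation
            }
        });
        threads.Add(thr);
        thr.Start();
    }

    // wait for all threads to finish the range
    threads.ForEach(t => t.Join());
}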
However, Parallel.For will do all of this a lot more conveniently:
Parallel.For(lowerInclusive, upperExclusive, i => DoWork(i));
or (to constrain to 12 threads):
var options = new ParallelOptions { MaxDegreeOfParallelism = 12 };
Parallel.For(lowerInclusive, upperExclusive, options, i => DoWork(i));