How can a progress bar be added to the third loop shown below?
Experimental evidence has shown that, of the three loops below, the third is the fastest. Performance is best when the thread count matches the number of logical processors on the CPU; I think this is because less time is spent allocating and releasing thread resources.
This is code to generate a noise map. Sometimes the processing time is long enough that a progress bar is needed.
for (int j = 0; j < data.Length; j++)
{
var x = (location.X + (j % width));
var y = (location.Y + (j / width));
Vector3 p = new Vector3(x, y, frame);
p *= zoom;
float val = noise.GetNoise(p);
data[j] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
}
Parallel.For(0, data.Length, (i) => {
var x = (location.X + (i % width));
var y = (location.Y + (i / width));
Vector3 p = new Vector3(x, y, frame);
p *= zoom;
float val = noise.GetNoise(p);
data[i] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
});
Parallel.For(0, threads, (i) =>
{
int from = i * data.Length / threads;
int to = from + data.Length / threads;
if (i == threads - 1) to = data.Length - 1;
for (int j = from; j < to; j++)
{
var x = (location.X + (j % width));
var y = (location.Y + (j / width));
Vector3 p = new Vector3(x, y, frame);
p *= zoom;
float val = noise.GetNoise(p);
data[j] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
}
}
);
A progress bar whose update rate is limited to a few times per second, so that no time is wasted drawing it too often, would be best.
After adding IProgress I have arrived at the code below, and it almost works. The problem is that the progress bar only updates after the Parallel.For has completely finished.
private async Task<int> FillDataParallelAsync(int threads, IProgress<int> progress)
{
int percent = 0;
// parallel loop - easy and fast.
Parallel.For(0, threads, (i) =>
{
int from = i * data.Length / threads;
int to = from + data.Length / threads;
if (i == threads - 1) to = data.Length - 1;
for (int j = from; j < to; j++)
{
var x = (location.X + (j % width));
var y = (location.Y + (j / width));
Vector3 p = new Vector3(x, y, frame);
p *= zoom;
float val = noise.GetNoise(p);
data[j] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
if (j % (data.Length / 100) == 0)
{
if (progress != null)
{
progress.Report(percent);
}
Interlocked.Increment(ref percent);
}
}
}
);
return 0;
}
After working on this for too long, it now looks like this.
private Boolean FillDataParallel3D(int threads, CancellationToken token, IProgress<int> progress)
{
int percent = 0;
Vector3 imageCenter = location;
imageCenter.X -= width / 2;
imageCenter.Y -= height / 2;
ParallelOptions options = new ParallelOptions { CancellationToken = token };
// parallel loop - easy and fast.
try
{
ParallelLoopResult result =
Parallel.For(0, threads, options, (i, loopState) =>
{
int from = i * data.Length / threads;
int to = from + data.Length / threads;
if (i == threads - 1) to = data.Length - 1;
for (int j = from; j < to; j++)
{
if (loopState.ShouldExitCurrentIteration) break;
Vector3 p = imageCenter;
p.X += (j % width);
p.Y += (j / width);
p *= zoom;
float val = noise.GetNoise(p);
data[j] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
if (j % (data.Length / 100) == 0)
{
try { if (progress != null) progress.Report(percent); }
catch { }
Interlocked.Increment(ref percent);
}
}
}
);
return result.IsCompleted;
}
catch { }
return false;
}
The progress counter is incremented partly from each thread, up to a total of 100 times. There is still some latency in updating the progress bar, but that seems to be unavoidable. For example, if progress is reported 100 times in less time than it takes to draw 100 updates, the reports seem to queue up and the bar keeps advancing after the method has returned. Suppressing the display of progress for a second after calling the method works well enough. The progress bar is only really useful when the method takes so long that you start to wonder whether anything is happening.
Full project at https://github.com/David-Marsh/Designer
You may want to take a look at IProgress<T> on MSDN. IProgress<T> was introduced as a standard way of reporting progress. The interface exposes a Report(T) method, which the async task calls to report progress. You expose this interface in the signature of the async method, and the caller must provide an object that implements it.
EDIT:
What the threads should report depends on how fine-grained you need your reporting. The easiest approach would be to report progress after each iteration. I'm intentionally writing iteration because the Parallel.For method does not necessarily execute each iteration on a separate thread.
Your progress, probably in percent, is something that is shared by all threads. So calculating the current percentage and calling the Report method will most likely require locking. Be aware that this has some performance impact.
As for calculating the current progress: you know how many iterations there are, so you can work out how much one iteration contributes to the overall work. At the beginning or end of each iteration, you simply add that amount to the overall progress.
Here is an example that might help you solve your problem:
public void ParallelForProgressExample(IProgress<int> progress = null)
{
int percent = 0;
...
var result = Parallel.For(fromInclusive, toExclusive, (i, state) =>
{
// do your work
lock (lockObject)
{
// calculate percentage
// ...
// update progress
progress?.Report(percent);
}
});
}
As progress, you can either use the System.Progress<T> class or implement the IProgress<T> interface yourself.
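Since the question also asks for a limited update rate, here is a minimal sketch (not part of the answer above) of an IProgress<int> wrapper that forwards at most one report per interval; the class name ThrottledProgress and the 250 ms default are made up for illustration:
using System;
using System.Diagnostics;
using System.Threading;

// Sketch only: forwards at most one report per interval to the wrapped progress object.
public sealed class ThrottledProgress : IProgress<int>
{
    private readonly IProgress<int> _inner;
    private readonly long _minIntervalMs;
    private readonly Stopwatch _watch = Stopwatch.StartNew();
    private long _lastReportMs;

    public ThrottledProgress(IProgress<int> inner, long minIntervalMs = 250)
    {
        _inner = inner;
        _minIntervalMs = minIntervalMs;
        _lastReportMs = -minIntervalMs; // let the very first report through
    }

    public void Report(int value)
    {
        long now = _watch.ElapsedMilliseconds;
        long last = Interlocked.Read(ref _lastReportMs);
        // Forward only if the interval has elapsed and this thread wins the race to claim the slot.
        if (now - last >= _minIntervalMs &&
            Interlocked.CompareExchange(ref _lastReportMs, now, last) == last)
        {
            _inner.Report(value);
        }
    }
}
The caller would wrap the UI-bound Progress<int> in this before passing it to the worker method, and may still want to report 100% unconditionally at the end, since the final report could otherwise be throttled away.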
This is not related to the main question (the progress bar). I just want to note that the .NET Framework contains a Partitioner class, so there is no need to partition the data manually:
Parallel.ForEach(Partitioner.Create(0, data.Length), range =>
{
for (int j = range.Item1; j < range.Item2; j++)
{
var x = (location.X + (j % width));
var y = (location.Y + (j / width));
Vector3 p = new Vector3(x, y, frame);
p *= zoom;
float val = noise.GetNoise(p);
data[j] += val;
min = Math.Min(min, val);
max = Math.Max(max, val);
}
});
Related
This is a piece of my code, which computes a derivative (a convolution with a filter). It works correctly, but it takes a long time (because of the height and width).
"Data" is a grey image bitmap.
"Filter" is [3,3] matrix.
"fh" and "fw" maximum values are 3.
I am looking to speed up this code.
I also tried using Parallel.For, but it didn't work correctly (an index-out-of-bounds error).
private float[,] Differentiate(int[,] Data, int[,] Filter)
{
int i, j, k, l, Fh, Fw;
Fw = Filter.GetLength(0);
Fh = Filter.GetLength(1);
float sum = 0;
float[,] Output = new float[Width, Height];
for (i = Fw / 2; i <= (Width - Fw / 2) - 1; i++)
{
for (j = Fh / 2; j <= (Height - Fh / 2) - 1; j++)
{
sum=0;
for(k = -Fw/2; k <= Fw/2; k++)
{
for(l = -Fh/2; l <= Fh/2; l++)
{
sum = sum + Data[i+k, j+l] * Filter[Fw/2+k, Fh/2+l];
}
}
Output[i,j] = sum;
}
}
return Output;
}
For parallel execution you need to drop the C-style variable declarations at the top of the method and declare the variables in the scope where they are actually used, so they are not shared between threads. Making it parallel should provide some performance benefit, but turning every loop into a Parallel.For is not a good idea, as there is a limit to the number of threads that can actually run in parallel. I would parallelize the top-level loop only:
private static float[,] Differentiate(int[,] Data, int[,] Filter)
{
var Fw = Filter.GetLength(0);
var Fh = Filter.GetLength(1);
float[,] Output = new float[Width, Height];
Parallel.For(Fw / 2, Width - Fw / 2, (i, state) => // the upper bound is exclusive, so no additional -1
{
for (var j = Fh / 2; j <= (Height - Fh / 2) - 1; j++)
{
var sum = 0f; // accumulate in float, as in the original code
for (var k = -Fw / 2; k <= Fw / 2; k++)
{
for (var l = -Fh / 2; l <= Fh / 2; l++)
{
sum = sum + Data[i + k, j + l] * Filter[Fw / 2 + k, Fh / 2 + l];
}
}
Output[i, j] = sum;
}
});
return Output;
}
This is a perfect example of a task where using the GPU beats using the CPU. A GPU can perform trillions of floating-point operations per second (TFLOPS), while CPU performance is still measured in GFLOPS. The catch is that it is only worthwhile for SIMD-style work (Single Instruction, Multiple Data): the GPU excels at data-parallel tasks, but if different data needs different instructions, using the GPU has no advantage.
In your program, the elements of your bitmap all go through the same calculations: the same computations, just with slightly different data (SIMD!). So using the GPU is a great option. It won't be too complex either, because in your calculation the GPU threads would not need to exchange information, nor would they depend on the results of previous iterations (each element would be processed by its own thread on the GPU).
You can use, for example, OpenCL to easily access the GPU. More on OpenCL and using the GPU here: https://www.codeproject.com/Articles/502829/GPGPU-image-processing-basics-using-OpenCL-NET
So I converted a recursive function to an iterative one and then used Parallel.ForEach, but when I ran it through VTune it was only really using 2 logical cores for the majority of its run time.
I decided to attempt to use managed threads instead, and converted this code:
for (int N = 2; N <= length; N <<= 1)
{
int maxThreads = 4;
var workGroup = Enumerable.Range(0, maxThreads);
Parallel.ForEach(workGroup, i =>
{
for (int j = ((i / maxThreads) * length); j < (((i + 1) / maxThreads) * length); j += N)
{
for (int k = 0; k < N / 2; k++)
{
int evenIndex = j + k;
int oddIndex = j + k + (N / 2);
var even = output[evenIndex];
var odd = output[oddIndex];
output[evenIndex] = even + odd * twiddles[k * (length / N)];
output[oddIndex] = even + odd * twiddles[(k + (N / 2)) * (length / N)];
}
}
});
}
Into this:
for (int N = 2; N <= length; N <<= 1)
{
int maxThreads = 4;
Thread one = new Thread(() => calculateChunk(0, maxThreads, length, N, output));
Thread two = new Thread(() => calculateChunk(1, maxThreads, length, N, output));
Thread three = new Thread(() => calculateChunk(2, maxThreads, length, N, output));
Thread four = new Thread(() => calculateChunk(3, maxThreads, length, N, output));
one.Start();
two.Start();
three.Start();
four.Start();
}
public void calculateChunk(int i, int maxThreads, int length, int N, Complex[] output)
{
for (int j = ((i / maxThreads) * length); j < (((i + 1) / maxThreads) * length); j += N)
{
for (int k = 0; k < N / 2; k++)
{
int evenIndex = j + k;
int oddIndex = j + k + (N / 2);
var even = output[evenIndex];
var odd = output[oddIndex];
output[evenIndex] = even + odd * twiddles[k * (length / N)];
output[oddIndex] = even + odd * twiddles[(k + (N / 2)) * (length / N)];
}
}
}
The issue is that in the fourth thread, on the last iteration of the N loop, I get an index-out-of-bounds exception on the output array, where the index is attempting to access the equivalent of the length.
I cannot pinpoint the cause with the debugger, but I believe it has to do with the threads; I ran the code without the threads and it worked as intended.
If any of the code needs changing let me know; I usually get a few suggested edits. Thanks for your help. I have tried to sort it out myself and am fairly certain the problem is in my threading, but I cannot see how.
PS: The intended purpose is to parallelize this segment of code.
The observed behaviour is almost certainly due to the use of a captured loop iteration variable N. I can reproduce your situation with a simple test:
ConcurrentBag<int> numbers = new ConcurrentBag<int>();
for (int i = 0; i < 10000; i++)
{
Thread t = new Thread(() => numbers.Add(i));
t.Start();
//t.Join(); // Uncomment this to get expected behaviour.
}
// You'd not expect this assert to be true, but most of the time it will be.
Assert.True(numbers.Contains(10000));
Put simply, your for loop is racing to increment N before the value of N can be copied by the delegate that executes the calculateChunk call. As a result, calculateChunk sees almost random values of N, going up to (and including) length << 1; that is what is causing your IndexOutOfRangeException.
The output values you get will be rubbish too, as you can never rely on the value of N being correct.
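If you keep the manual threads, a minimal sketch of a fix (using the same calculateChunk method as above) is to copy N into a local before the threads are created, and to join them before the next stage starts, since each FFT stage depends on the previous one's output:
for (int N = 2; N <= length; N <<= 1)
{
    int maxThreads = 4;
    int stageSize = N; // local copy: each lambda captures this stage's value, not the shared N

    var threads = new Thread[maxThreads];
    for (int i = 0; i < maxThreads; i++)
    {
        int chunk = i; // local copy of the chunk index, for the same reason
        threads[i] = new Thread(() => calculateChunk(chunk, maxThreads, length, stageSize, output));
        threads[i].Start();
    }

    // Wait for the whole stage to finish before the next value of N is processed.
    foreach (var t in threads)
        t.Join();
}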
If you want to safely rewrite the original code to utilize more cores, move the Parallel.ForEach from the inner loop to the outer loop. If the number of outer-loop iterations is high, the load balancer will be able to do its job properly (which it can't with your current workGroup count of 4; that number of elements is simply too low).
I am trying to improve the speed of my program without changing the algorithm.
Currently I use this implementation of DFT:
public double[] dft(double[] data) {
int n = data.Length;
int m = n;// I use m = n / 2d;
float[] real = new float[n];
float[] imag = new float[n];
double[] result = new double[m];
float pi_div = (float)(2.0 * Math.PI / n);
for (int w = 0; w < m; w++) {
float a = w * pi_div;
for (int t = 0; t < n; t++) {
real[w] += (float)(data[t] * Math.Cos(a * t)); //thinking of threading this
imag[w] += (float)(data[t] * Math.Sin(a * t)); //and this
}
result[w] = (float)(Math.Sqrt(real[w] * real[w] + imag[w] * imag[w]) / n);
}
return result;
}
It is rather slow, but there is one spot where I can see that improvements could be made.
The internal parts of the function are two separate tasks: the real and imaginary summations can be done separately, but they must always be joined to calculate the result.
Any ideas? I tried a few implementations I saw on the web, but they all crashed, and I have very little threading experience.
When you have a CPU-bound algorithm that can be parallelized, you can easily transform your single-threaded implementation into a multi-threaded one using the Parallel class.
In your case you have two nested loops, but the number of iterations of the outer loop is much larger than the number of CPU cores you can execute on, so it is only necessary to parallelize the outer loop to get all cores spinning:
public double[] ParallelDft(double[] data) {
int n = data.Length;
int m = n;// I use m = n / 2d;
float[] real = new float[n];
float[] imag = new float[n];
double[] result = new double[m];
float pi_div = (float)(2.0 * Math.PI / n);
Parallel.For(0, m,
w => {
float a = w * pi_div;
for (int t = 0; t < n; t++) {
real[w] += (float)(data[t] * Math.Cos(a * t)); //thinking of threading this
imag[w] += (float)(data[t] * Math.Sin(a * t)); //and this
}
result[w] = (float)(Math.Sqrt(real[w] * real[w] + imag[w] * imag[w]) / n);
}
);
return result;
}
I have taken your code and replaced the outer for loop with Parallel.For. On my computer with eight hyper-threaded cores I get a sevenfold increase in execution speed.
Another way to increase the execution speed is to employ the SIMD instruction set of the CPU. The System.Numerics.Vectors library and the Yeppp! library allow you to call SIMD instructions from managed code, but it will require some work on your part to implement the algorithm using these instructions.
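As a rough sketch only (not a full DFT, and assuming the cosine factors for one output bin have already been precomputed into a float array, since Vector<T> has no vectorized trigonometric functions), the inner multiply-accumulate could look like this with System.Numerics.Vector<float>:
using System;
using System.Numerics;

static class SimdExample
{
    // Sums data[t] * cosTable[t] over all t, processing several lanes per iteration.
    static float DotProduct(float[] data, float[] cosTable)
    {
        int lanes = Vector<float>.Count;
        var acc = Vector<float>.Zero;
        int t = 0;

        // Vectorized main loop: multiply-accumulate a full vector of lanes at a time.
        for (; t <= data.Length - lanes; t += lanes)
        {
            acc += new Vector<float>(data, t) * new Vector<float>(cosTable, t);
        }

        // Horizontal sum of the accumulator, plus the scalar tail.
        float sum = Vector.Dot(acc, Vector<float>.One);
        for (; t < data.Length; t++)
            sum += data[t] * cosTable[t];

        return sum;
    }
}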
You could create a new Task for each w of the outer loop, running the inner for loop inside it, and have each task save its result in a thread-safe dictionary (ConcurrentDictionary).
I think the following code may be useful:
public ConcurrentDictionary<int, double> result = new ConcurrentDictionary<int, double>();
public void dft(double[] data)
{
int n = data.Length;
int m = n;// I use m = n / 2d;
float pi_div = (float)(2.0 * Math.PI / n);
var tasks = new List<Task>();
for (int w = 0; w < m; w++)
{
var w1 = w;
tasks.Add(Task.Factory.StartNew(() =>
{
float a = w1*pi_div;
float real = 0;
float imag=0;
for (int t = 0; t < n; t++)
{
real += (float)(data[t] * Math.Cos(a * t));
imag += (float)(data[t] * Math.Sin(a * t));
}
result.TryAdd(w1, (float) (Math.Sqrt(real*real + imag*imag)/n));
}));
}
// Wait for all tasks so the results are available when the method returns.
Task.WaitAll(tasks.ToArray());
}
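A hypothetical usage example (the containing class name DftCalculator is made up for illustration; it relies on the Task.WaitAll call shown above so that the dictionary is complete when dft returns):
double[] data = new double[1024];
for (int i = 0; i < data.Length; i++)
    data[i] = Math.Sin(2 * Math.PI * i / 64.0); // a simple test signal

var calc = new DftCalculator();
calc.dft(data);

// Read the magnitudes back in index order.
for (int w = 0; w < data.Length; w++)
    Console.WriteLine($"{w}: {calc.result[w]}");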
I'm trying to make a function that calculates the factorial of a number in parallel, just for testing purposes.
Let's say I have 4 cores on my CPU, so I will split the "problem" into 4 chunks.
With that in mind, I made this:
public class FactorialPTest
{
public static object _locker = new object();
public static long Factorial(int x)
{
long result = 1;
int right = 0;
int nr = x;
bool done = false;
for (int i = 0; i < nr; i += (nr / 4))
{
int step = i;
new Thread(new ThreadStart(() =>
{
right = (step + nr / 4) > nr ? nr : (step + nr / 4);
long chunkResult = ChunkFactorial(step + 1, right);
lock (_locker)
{
result *= chunkResult;
if (right == nr)
done = true;
}
})).Start();
}
while(!done)
{
Thread.Sleep(10);
}
return result;
}
public static long ChunkFactorial(int left, int right)
{
//Console.WriteLine("left: {0} ; right: {1}", left, right);
Console.WriteLine("ChunkFactorial Thread ID :" + Thread.CurrentThread.ManagedThreadId);
if (left == right)
return left == 0 ? 1 : left;
else return right * ChunkFactorial(left, right - 1);
}
public static void main()
{
Console.WriteLine(Factorial(15));
}
}
Sometimes it works, sometimes it gives me intermediate results, and sometimes a deadlock happens.
Why is this happening? Shouldn't Thread.Sleep(10) pause the main thread until I get the final result?
I'd suggest looking into the Task Parallel Library. Among other things, it abstracts away a lot of the low-level concerns of multi-threading.
You can represent each chunk of work with a task, add the tasks to a collection, and then wait for them all to finish:
public static long Factorial(int x)
{
long result = 1;
int right = 0;
int nr = x;
bool done = false;
var tasks = new List<Task>();
for (int i = 0; i < nr; i += (nr / 4))
{
int step = i;
tasks.Add(Task.Run(() =>
{
right = (step + nr / 4) > nr ? nr : (step + nr / 4);
long chunkResult = ChunkFactorial(step + 1, right);
lock (_locker)
{
result *= chunkResult;
}
}));
}
Task.WaitAll(tasks.ToArray());
return result;
}
In your original code the last chunk could conceivably complete its work first, so right would equal nr even though the other chunks hadn't been calculated yet. Also, right was shared between all the threads, which could likewise lead to unpredictable results, i.e. all the threads were trying to use the same variable to hold different values at the same time.
Generally you should try to avoid sharing state between threads where possible. The code above can be improved by having each task return its result and then using those results to calculate the final one:
public static long Factorial(int x)
{
int nr = x;
var tasks = new List<Task<long>>();
for (int i = 0; i < nr; i += (nr / 4))
{
int step = i;
tasks.Add(Task.Run(() =>
{
int right = (step + nr / 4) > nr ? nr : (step + nr / 4);
return ChunkFactorial(step + 1, right);
}));
}
Task.WaitAll(tasks.ToArray());
return tasks.Select(t => t.Result).Aggregate(((i, next) => i * next));
}
You could use one of the Parallel.For overloads with aggregation; it will handle the parallelism, partitioning of the workload, and aggregation of the results for you. With a long result type you can only go up to 20! before overflowing, so it also makes sense to wrap the work in checked { ... } to catch overflows. The code could look like:
public long CalculateFactorial(long value)
{
var result = 1L;
var syncRoot = new object();
checked
{
Parallel.For(
// always 1
1L,
// target value (the upper bound is exclusive, so go one past it)
value + 1,
// if need more control, add { MaxDegreeOfParallelism = 4}
new ParallelOptions(),
// thread local result init
() => 1L,
// next value
(i, state, localState) => localState * i,
// aggregate local thread results
localState =>
{
lock (syncRoot)
{
result *= localState;
}
}
);
}
return result;
}
Hope this helps.
I need to calculate the number Pi via the Monte Carlo method using the Task Parallel Library, but when my parallelized program runs it calculates Pi much more slowly than its unparallel analog. How do I fix it? The parallel calculating class and its unparallel analog are below:
class CalcPiTPL
{
Object randLock = new object();
int n;
int N_0;
double aPi;
public StringBuilder Msg; // diagnostic message
double x, y;
Stopwatch stopWatch = new Stopwatch();
public void Init(int aN)
{
stopWatch.Start();
n = aN; // save total calculate-iterations amount
aPi = -1; // flag, if no any calculate-iteration has been completed
Msg = new StringBuilder("No any calculate-iteration has been completed");
}
public void Run()
{
if (n < 1)
{
Msg = new StringBuilder("Inbalid N-value");
return;
}
Random rnd = new Random(); // to create randomizer
Task[] tasks = new Task[4];
tasks[0] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
tasks[1] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
tasks[2] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
tasks[3] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
Task.WaitAll(tasks[0], tasks[1], tasks[2], tasks[3]);
aPi = 4.0 * ((double)N_0 / (double)n); // to calculate approximate Pi - value
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
ts.Hours, ts.Minutes, ts.Seconds,
ts.Milliseconds / 10);
Console.WriteLine("RunTime " + elapsedTime);
}
public double Done()
{
if (aPi > 0)
{
Msg = new StringBuilder("Calculates has been completed successful");
return aPi; // return gotten value
}
else
{
return 0; // no result
}
}
public void PointGenerator(int n, Random rnd)
{
for (int i = 1; i <= n / 4; i++)
{
lock (randLock)
{
x = rnd.NextDouble(); // to generate coordinates
y = rnd.NextDouble(); //
if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
{
//Interlocked.Increment(ref N_0);
N_0++; // coordinate in a circle! mark it by incrementing N_0
}
}
}
}
}
Unparallel analog:
class TCalcPi//unparallel calculating method
{
int N;
int N_0;
double aPi;
public StringBuilder Msg; // diagnostic message
double x, y;
Stopwatch stopWatch = new Stopwatch();
public void Init(int aN)
{
stopWatch.Start();
N = aN; // save total calculate-iterations amount
aPi = -1; // flag, if no any calculate-iteration has been completed
Msg = new StringBuilder("No any calculate-iteration has been completed");
}
public void Run()
{
if (N < 1)
{
Msg = new StringBuilder("Invalid N - value");
return;
}
int i;
Random rnd = new Random(); // to create randomizer
for (i = 1; i <= N; i++)
{
x = rnd.NextDouble(); // to generate coordinates
y = rnd.NextDouble(); //
if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
{
N_0++; // coordinate in a circle! mark it by incrementing N_0
}
}
aPi = 4.0 * ((double)N_0 / (double)N); // to calculate approximate Pi - value
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
ts.Hours, ts.Minutes, ts.Seconds,
ts.Milliseconds / 10);
Console.WriteLine("RunTime " + elapsedTime);
}
public double Done()
{
if (aPi > 0)
{
Msg = new StringBuilder("Calculates has been completed successful");
return aPi; // return gotten value
}
else
{
return 0; // no result
}
}
}
You have written the PointGenerator in a way in which it can barely benefit from being executed in parallel:
- The lock means it will have basically single-threaded performance, with additional threading overhead on top.
- The global state N_0 means you have to synchronize access. Granted, since it's just an int, you can use the Interlocked class to increment it efficiently.
What I would do is give each PointGenerator its own Random object and its own counter. Then there is no shared mutable state that could cause problems. Be careful, though: the default constructor of Random uses the system tick count as the seed, so creating several instances in quick succession might give you random generators with the same seed.
Once all the PointGenerators finish, you combine the results.
This would be very similar to what some of the TPL overloads of Parallel.For and Parallel.ForEach do.
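A minimal sketch of that approach (the class and method names are made up for illustration; each task gets its own explicitly seeded Random and its own local counter, and the counts are combined only at the end):
using System;
using System.Linq;
using System.Threading.Tasks;

static class MonteCarloPi
{
    public static double Estimate(int totalSamples, int workers)
    {
        int perWorker = totalSamples / workers;

        // One task per worker, each with its own Random and its own local counter.
        var tasks = Enumerable.Range(0, workers)
            .Select(seed => Task.Run(() =>
            {
                var rnd = new Random(seed); // explicit, distinct seeds
                int inside = 0;
                for (int i = 0; i < perWorker; i++)
                {
                    double x = rnd.NextDouble();
                    double y = rnd.NextDouble();
                    if ((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5) < 0.25)
                        inside++;
                }
                return inside;
            }))
            .ToArray();

        Task.WaitAll(tasks);

        // Combine the per-worker counts only once, at the end.
        long totalInside = tasks.Sum(t => (long)t.Result);
        return 4.0 * totalInside / (perWorker * (long)workers);
    }
}
It could be called as, for example, MonteCarloPi.Estimate(100000000, Environment.ProcessorCount).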
I know this post is old, but it still shows up when searching for how to compute pi in parallel in C#. I have modified the code to use the system's processor count for the workers. The lock is also not needed if we give the workers a return type, move some of the other variables into the worker function, and let everything be put together by yet another task. This version uses long to allow a larger iteration count. The instances of Random are created with the thread id as the seed, which I hope makes them produce different sequences of random numbers. I removed the Init method and put the initialization in the Run method instead. There are two ways of using this now, blocking and non-blocking. But first, here is the class:
public class CalcPiTPL
{
private long n;
private double pi;
private Stopwatch stopWatch = new Stopwatch();
private Task<int>[]? tasks = null;
private Task? taskOrchestrator = null;
private ManualResetEvent rst = new ManualResetEvent(false);
private bool isDone = false;
public string elapsedTime = string.Empty;
public double Pi { get { return pi; } }
public void Run(long n)
{
if (n < 1 || taskOrchestrator!=null) return;
isDone = false;
rst.Reset();
stopWatch.Start();
this.n = n; // save total calculate-iterations amount
pi = -1; // flag, if no any calculate-iteration has been completed
tasks = new Task<int>[Environment.ProcessorCount];
for(int i = 0; i < Environment.ProcessorCount; i++)
{
tasks[i] = Task.Factory.StartNew(() => PointGenerator(n));
}
taskOrchestrator = Task.Factory.StartNew(() => Orchestrator());
}
private void Orchestrator()
{
Task.WaitAll(tasks);
long N_0 = 0;
foreach (var task in tasks)
{
N_0 += task.GetAwaiter().GetResult();
}
pi = 4.0 * ((double)N_0 / (double)n); // to calculate approximate Pi - value
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10);
tasks = null;
taskOrchestrator = null;
isDone = true;
rst.Set();
}
public double Wait()
{
rst.WaitOne();
return pi;
}
public bool IsDone()
{
return isDone;
}
private int PointGenerator(long n)
{
int N_0 = 0;
Random rnd = new Random(Thread.CurrentThread.ManagedThreadId);
for (int i = 1; i <= n / Environment.ProcessorCount; i++)
{
double x = rnd.NextDouble(); // to generate coordinates
double y = rnd.NextDouble(); //
if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
{
N_0++;
}
}
return N_0;
}
}
Blocking call:
CalcPiTPL pi = new CalcPiTPL();
pi.Run(1000000000);
Console.WriteLine(pi.Wait());
Non-blocking call:
CalcPiTPL pi = new CalcPiTPL();
pi.Run(100000000);
while (pi.IsDone()==false)
{
Thread.Sleep(100);
// Do something else
}
Console.WriteLine(pi.Pi);
Adding an event would probably be nice if someone wants to use this in a GUI application. Maybe I will do that later.
Feel free to correct me if I messed something up.
When your whole parallel part is inside a lock scope, nothing actually runs in parallel; only a single thread can be inside a lock scope at any given moment.
You can simply use different Random instances instead of a single one.