How to accelerate the calculation of pi using TPL - C#

I need to calculate the number Pi via the Monte Carlo method using the Task Parallel Library, but when my parallelized program runs, it takes much longer to calculate Pi than its sequential analog. How do I fix it? The parallel calculation class and its sequential analog are below:
class CalcPiTPL
{
    Object randLock = new object();
    int n;
    int N_0;
    double aPi;
    public StringBuilder Msg; // diagnostic message
    double x, y;
    Stopwatch stopWatch = new Stopwatch();

    public void Init(int aN)
    {
        stopWatch.Start();
        n = aN; // save total calculate-iterations amount
        aPi = -1; // flag, if no calculate-iteration has been completed
        Msg = new StringBuilder("No calculate-iteration has been completed");
    }

    public void Run()
    {
        if (n < 1)
        {
            Msg = new StringBuilder("Invalid N-value");
            return;
        }
        Random rnd = new Random(); // to create randomizer
        Task[] tasks = new Task[4];
        tasks[0] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
        tasks[1] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
        tasks[2] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
        tasks[3] = Task.Factory.StartNew(() => PointGenerator(n, rnd));
        Task.WaitAll(tasks[0], tasks[1], tasks[2], tasks[3]);
        aPi = 4.0 * ((double)N_0 / (double)n); // to calculate approximate Pi-value
        stopWatch.Stop();
        TimeSpan ts = stopWatch.Elapsed;
        string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
            ts.Hours, ts.Minutes, ts.Seconds,
            ts.Milliseconds / 10);
        Console.WriteLine("RunTime " + elapsedTime);
    }

    public double Done()
    {
        if (aPi > 0)
        {
            Msg = new StringBuilder("Calculation has been completed successfully");
            return aPi; // return obtained value
        }
        else
        {
            return 0; // no result
        }
    }

    public void PointGenerator(int n, Random rnd)
    {
        for (int i = 1; i <= n / 4; i++)
        {
            lock (randLock)
            {
                x = rnd.NextDouble(); // to generate coordinates
                y = rnd.NextDouble(); //
                if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
                {
                    //Interlocked.Increment(ref N_0);
                    N_0++; // coordinate in the circle! mark it by incrementing N_0
                }
            }
        }
    }
}
Sequential analog:
class TCalcPi // sequential calculating method
{
    int N;
    int N_0;
    double aPi;
    public StringBuilder Msg; // diagnostic message
    double x, y;
    Stopwatch stopWatch = new Stopwatch();

    public void Init(int aN)
    {
        stopWatch.Start();
        N = aN; // save total calculate-iterations amount
        aPi = -1; // flag, if no calculate-iteration has been completed
        Msg = new StringBuilder("No calculate-iteration has been completed");
    }

    public void Run()
    {
        if (N < 1)
        {
            Msg = new StringBuilder("Invalid N-value");
            return;
        }
        int i;
        Random rnd = new Random(); // to create randomizer
        for (i = 1; i <= N; i++)
        {
            x = rnd.NextDouble(); // to generate coordinates
            y = rnd.NextDouble(); //
            if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
            {
                N_0++; // coordinate in the circle! mark it by incrementing N_0
            }
        }
        aPi = 4.0 * ((double)N_0 / (double)N); // to calculate approximate Pi-value
        stopWatch.Stop();
        TimeSpan ts = stopWatch.Elapsed;
        string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
            ts.Hours, ts.Minutes, ts.Seconds,
            ts.Milliseconds / 10);
        Console.WriteLine("RunTime " + elapsedTime);
    }

    public double Done()
    {
        if (aPi > 0)
        {
            Msg = new StringBuilder("Calculation has been completed successfully");
            return aPi; // return obtained value
        }
        else
        {
            return 0; // no result
        }
    }
}

You have written PointGenerator in a way that can barely benefit from being executed in parallel:
The lock means it will have essentially single-threaded performance, plus the additional threading overhead.
The shared state N_0 means you have to synchronize access. Granted, since it is just an int, you can use the Interlocked class to increment it efficiently.
What I would do is give each PointGenerator its own Random object and its own counter. Then there is no shared mutable state that could cause problems. Be careful though: the parameterless constructor of Random seeds itself from the system tick count, so creating several instances in quick succession might give you generators with the same seed.
Once all PointGenerators finish, you combine the results.
This would be very similar to what some of the TPL overloads of Parallel.For and Parallel.ForEach do.
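A minimal sketch of that approach might look like the following (variable names such as perTask and hits are just illustrative):
int perTask = n / 4;
var seedSource = new Random();               // used only to hand out distinct seeds
var tasks = new Task<int>[4];
for (int t = 0; t < 4; t++)
{
    int seed = seedSource.Next();            // a different seed per task
    tasks[t] = Task.Run(() =>
    {
        var rnd = new Random(seed);          // task-private randomizer
        int hits = 0;                        // task-private counter, no shared state
        for (int i = 0; i < perTask; i++)
        {
            double x = rnd.NextDouble();
            double y = rnd.NextDouble();
            if ((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5) < 0.25)
                hits++;
        }
        return hits;
    });
}
Task.WaitAll(tasks);
int N_0 = 0;
foreach (var task in tasks)
    N_0 += task.Result;                      // combine the per-task results
double aPi = 4.0 * N_0 / n;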

I know this post is old, but it still shows up when searching for how to compute pi in parallel in C#. I have modified this to use the system's processor count for the workers. Also, the lock is not needed if we use a return type for the workers, move some of the other variables into the worker function, and finally let everything be put together by yet another task. This uses long to allow a larger number of iterations. The Random instances are created with the thread id as the seed, which I hope makes them produce different sequences of random numbers. I removed the Init method and put the initialization in the Run method instead. There are two ways of using this now, blocking and non-blocking. But first, here is the class:
public class CalcPiTPL
{
    private long n;
    private double pi;
    private Stopwatch stopWatch = new Stopwatch();
    private Task<int>[]? tasks = null;
    private Task? taskOrchestrator = null;
    private ManualResetEvent rst = new ManualResetEvent(false);
    private bool isDone = false;
    public string elapsedTime = string.Empty;

    public double Pi { get { return pi; } }

    public void Run(long n)
    {
        if (n < 1 || taskOrchestrator != null) return;
        isDone = false;
        rst.Reset();
        stopWatch.Start();
        this.n = n; // save total calculate-iterations amount
        pi = -1; // flag, if no calculate-iteration has been completed
        tasks = new Task<int>[Environment.ProcessorCount];
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            tasks[i] = Task.Factory.StartNew(() => PointGenerator(n));
        }
        taskOrchestrator = Task.Factory.StartNew(() => Orchestrator());
    }

    private void Orchestrator()
    {
        Task.WaitAll(tasks);
        long N_0 = 0;
        foreach (var task in tasks)
        {
            N_0 += task.GetAwaiter().GetResult();
        }
        pi = 4.0 * ((double)N_0 / (double)n); // to calculate approximate Pi-value
        stopWatch.Stop();
        TimeSpan ts = stopWatch.Elapsed;
        elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10);
        tasks = null;
        taskOrchestrator = null;
        isDone = true;
        rst.Set();
    }

    public double Wait()
    {
        rst.WaitOne();
        return pi;
    }

    public bool IsDone()
    {
        return isDone;
    }

    private int PointGenerator(long n)
    {
        int N_0 = 0;
        Random rnd = new Random(Thread.CurrentThread.ManagedThreadId);
        for (int i = 1; i <= n / Environment.ProcessorCount; i++)
        {
            double x = rnd.NextDouble(); // to generate coordinates
            double y = rnd.NextDouble(); //
            if (((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) < 0.25)
            {
                N_0++;
            }
        }
        return N_0;
    }
}
Blocking call:
CalcPiTPL pi = new CalcPiTPL();
pi.Run(1000000000);
Console.WriteLine(pi.Wait());
Non-blocking call:
CalcPiTPL pi = new CalcPiTPL();
pi.Run(100000000);
while (pi.IsDone() == false)
{
    Thread.Sleep(100);
    // Do something else
}
Console.WriteLine(pi.Pi);
Adding an event would probably be nice if someone wants to use this in a GUI application. Maybe I will do that later.
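A minimal sketch of such an event (the member name Calculated is just illustrative):
public event Action<double>? Calculated;     // raised with the computed Pi value

// at the end of Orchestrator(), after rst.Set():
// Calculated?.Invoke(pi);
In a GUI application the handler would still have to marshal back to the UI thread, for example via Control.Invoke or a captured SynchronizationContext.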
Feel free to correct me if I messed something up.

When your whole parallel part is inside a lock scope, nothing is actually parallel: only a single thread can be inside a lock scope at any given moment.
You can simply use a separate Random instance per thread instead of a single shared one.
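A minimal sketch of that idea, using Parallel.For with a thread-local Random (seeded from a Guid hash so concurrently created instances don't share a seed):
long N_0 = 0;
var rnd = new ThreadLocal<Random>(() => new Random(Guid.NewGuid().GetHashCode()));
Parallel.For(0, n, i =>
{
    double x = rnd.Value.NextDouble();       // each thread sees its own Random
    double y = rnd.Value.NextDouble();
    if ((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5) < 0.25)
        Interlocked.Increment(ref N_0);      // lock-free shared counter
});
double aPi = 4.0 * N_0 / n;
The Interlocked increment still costs a little per hit; keeping a per-task counter, as in the earlier sketch, avoids even that.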


How can a progress bar be added to this parallel code

How can a progress bar be added to the third loop shown below?
Experimental evidence has shown that, of the following three loops, the third is the fastest. The best performance is when the thread count is the same as the number of logical processors of the CPU. I think this is due to reduced time spent allocating and releasing thread resources.
This is code to generate a noise map. Sometimes the processing time is long enough that a progress bar is needed.
for (int j = 0; j < data.Length; j++)
{
    var x = (location.X + (j % width));
    var y = (location.Y + (j / width));
    Vector3 p = new Vector3(x, y, frame);
    p *= zoom;
    float val = noise.GetNoise(p);
    data[j] += val;
    min = Math.Min(min, val);
    max = Math.Max(max, val);
}

Parallel.For(0, data.Length, (i) =>
{
    var x = (location.X + (i % width));
    var y = (location.Y + (i / width));
    Vector3 p = new Vector3(x, y, frame);
    p *= zoom;
    float val = noise.GetNoise(p);
    data[i] += val;
    min = Math.Min(min, val);
    max = Math.Max(max, val);
});

Parallel.For(0, threads, (i) =>
{
    int from = i * data.Length / threads;
    int to = from + data.Length / threads;
    if (i == threads - 1) to = data.Length - 1;
    for (int j = from; j < to; j++)
    {
        var x = (location.X + (j % width));
        var y = (location.Y + (j / width));
        Vector3 p = new Vector3(x, y, frame);
        p *= zoom;
        float val = noise.GetNoise(p);
        data[j] += val;
        min = Math.Min(min, val);
        max = Math.Max(max, val);
    }
});
A progress bar with an update rate limited to a few times a second, so as to not waste time drawing the progress bar too often, would be best.
Adding IProgress, I arrived at the code below and it almost works. The problem is that the progress bar updates after the Parallel.For has completely finished.
private async Task<int> FillDataParallelAsync(int threads, IProgress<int> progress)
{
    int precent = 0;
    /// parallel loop - easy and fast.
    Parallel.For(0, threads, (i) =>
    {
        int from = i * data.Length / threads;
        int to = from + data.Length / threads;
        if (i == threads - 1) to = data.Length - 1;
        for (int j = from; j < to; j++)
        {
            var x = (location.X + (j % width));
            var y = (location.Y + (j / width));
            Vector3 p = new Vector3(x, y, frame);
            p *= zoom;
            float val = noise.GetNoise(p);
            data[j] += val;
            min = Math.Min(min, val);
            max = Math.Max(max, val);
            if (j % (data.Length / 100) == 0)
            {
                if (progress != null)
                {
                    progress.Report(precent);
                }
                Interlocked.Increment(ref precent);
            }
        }
    });
    return 0;
}
After working on this for too long, it now looks like this.
private Boolean FillDataParallel3D(int threads, CancellationToken token, IProgress<int> progress)
{
    int precent = 0;
    Vector3 imageCenter = location;
    imageCenter.X -= width / 2;
    imageCenter.Y -= height / 2;
    ParallelOptions options = new ParallelOptions { CancellationToken = token };
    /// parallel loop - easy and fast.
    try
    {
        ParallelLoopResult result =
            Parallel.For(0, threads, options, (i, loopState) =>
            {
                int from = i * data.Length / threads;
                int to = from + data.Length / threads;
                if (i == threads - 1) to = data.Length - 1;
                for (int j = from; j < to; j++)
                {
                    if (loopState.ShouldExitCurrentIteration) break;
                    Vector3 p = imageCenter;
                    p.X += (j % width);
                    p.Y += (j / width);
                    p *= zoom;
                    float val = noise.GetNoise(p);
                    data[j] += val;
                    min = Math.Min(min, val);
                    max = Math.Max(max, val);
                    if (j % (data.Length / 100) == 0)
                    {
                        try { if (progress != null) progress.Report(precent); }
                        catch { }
                        Interlocked.Increment(ref precent);
                    }
                }
            });
        return result.IsCompleted;
    }
    catch { }
    return false;
}
The progress is incremented in parts from each thread, up to a total of 100 times. It still has some latency in updating the progress bar, but that seems to be unavoidable. For example, if the progress bar increments 100 times in less than the time it takes to draw 100 updates, the progress seems to queue up and continues to advance after the method returns. Suppressing the display of progress for a second after calling the method works well enough. The progress bar is only really useful when the method takes so long that you wonder if anything is happening.
Full project at https://github.com/David-Marsh/Designer
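One way to limit the update rate, as suggested above, is to wrap the IProgress<T> in a small throttling decorator that drops reports arriving faster than a minimum interval. A minimal sketch (the class name ThrottledProgress is just illustrative):
public sealed class ThrottledProgress<T> : IProgress<T>
{
    private readonly IProgress<T> _inner;
    private readonly long _minIntervalMs;
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private long _lastReportMs;

    public ThrottledProgress(IProgress<T> inner, long minIntervalMs = 250)
    {
        _inner = inner;
        _minIntervalMs = minIntervalMs;
    }

    public void Report(T value)
    {
        long now = _clock.ElapsedMilliseconds;
        long last = Interlocked.Read(ref _lastReportMs);
        // Forward the report only if enough time has passed and no other
        // thread has forwarded one in the meantime.
        if (now - last >= _minIntervalMs &&
            Interlocked.CompareExchange(ref _lastReportMs, now, last) == last)
        {
            _inner.Report(value);
        }
    }
}
It can then be passed in place of the original progress object, e.g. FillDataParallel3D(threads, token, new ThrottledProgress<int>(progress)).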
You may want to take a look at IProgress<T> on MSDN. IProgress<T> was introduced as a standard way of reporting progress. The interface exposes a Report(T) method, which the async task calls to report progress. You expose this interface in the signature of the async method, and the caller must provide an object that implements it.
EDIT:
What the threads should report depends on how fine-grained you need your reporting to be. The easiest approach would be to report progress after each iteration. I intentionally write iteration because the Parallel.For method does not necessarily execute each iteration on a separate thread.
Your progress, probably in percent, is something that is shared by all threads, so calculating the current progress in percent and calling the Report method will most likely require locking. Be aware that this will have some performance impact.
As for calculating the current progress: you know how many iterations you have, so you can calculate how much one iteration contributes to the overall work. At the beginning or at the end of each iteration, you simply add that difference to the overall progress.
Here is an example that might help you solve your problem:
public void ParallelForProgressExample(IProgress<int> progress = null)
{
    int percent = 0;
    ...
    var result = Parallel.For(fromInclusive, toExclusive, (i, state) =>
    {
        // do your work

        lock (lockObject)
        {
            // calculate percentage
            // ...
            // update progress
            progress?.Report(percent);
        }
    });
}
For the progress object, you can either use the System.Progress<T> class or implement the IProgress<T> interface yourself.
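For example, from a UI event handler it could be wired up as below (the control and handler names are just illustrative). Progress<T> captures the current SynchronizationContext, so the callback runs back on the UI thread:
private async void startButton_Click(object sender, EventArgs e)
{
    // The callback passed to Progress<int> is posted back to the UI thread.
    var progress = new Progress<int>(percent => progressBar1.Value = Math.Min(percent, 100));
    await Task.Run(() => ParallelForProgressExample(progress));
}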
This is not related to the main question (the progress bar). I just want to note that the .NET Framework contains a Partitioner class, so there is no need to partition the data manually:
Parallel.ForEach(Partitioner.Create(0, data.Length), range =>
{
    for (int j = range.Item1; j < range.Item2; j++)
    {
        var x = (location.X + (j % width));
        var y = (location.Y + (j / width));
        Vector3 p = new Vector3(x, y, frame);
        p *= zoom;
        float val = noise.GetNoise(p);
        data[j] += val;
        min = Math.Min(min, val);
        max = Math.Max(max, val);
    }
});

How do I avoid a StackOverflowException within a finite loop (C#)

I'm trying to write code to find prime numbers within a given range. Unfortunately I'm running into a problem: too many repetitions give me a StackOverflowException after prime number 30000. I have tried using a foreach and also not using a list (handling each number as it comes), but nothing seems to handle the problem at hand.
How can I make this program run forever without causing a stack overflow?
class Program
{
    static void Main(string[] args)
    {
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        List<double> Primes = new List<double>();
        const double Start = 0;
        const double End = 100000;
        double counter = 0;
        int lastInt = 0;
        for (int i = 0; i < End; i++)
            Primes.Add(i);
        for (int i = 0; i < Primes.Count; i++)
        {
            lastInt = (int)Primes[i] - RoundOff((int)Primes[i]);
            Primes[i] = (int)CheckForPrime(Primes[i], Math.Round(Primes[i] / 2));
            if (Primes[i] != 0)
            {
                Console.Write(", {0}", Primes[i]);
                counter++;
            }
        }
        stopwatch.Stop();
        Console.WriteLine("\n\nNumber of prime-numbers between {0} and {1} is: {2}, time it took to calc this: {3} (millisecounds).\n\n" +
            " The End\n", Start, End, counter, stopwatch.ElapsedMilliseconds);
    }

    public static double CheckForPrime(double Prim, double Devider)
    {
        if (Prim / Devider == Math.Round(Prim / Devider))
            return 0;
        else if (Devider > 2)
            return CheckForPrime(Prim, Devider - 1);
        else
            return Prim;
    }

    public static int RoundOff(int i)
    {
        return ((int)Math.Floor(i / 10.0)) * 10;
    }
}
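For reference, the recursion in CheckForPrime is what makes the call stack grow with the size of the number being tested (roughly one frame per divisor tried). The same check can be written as a loop so the depth stays constant; a minimal transliteration keeping the original names and double parameters:
public static double CheckForPrime(double Prim, double Devider)
{
    // Same logic as above, but "try the next divider" is a loop iteration
    // instead of a recursive call, so the stack does not grow.
    while (true)
    {
        if (Prim / Devider == Math.Round(Prim / Devider))
            return 0;
        if (Devider > 2)
            Devider -= 1;
        else
            return Prim;
    }
}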

Parallel factorial with multiple threads

I'm trying to make a function that calculates the factorial of a number in parallel, just for testing purposes.
Let's say I have 4 cores on my CPU, so I will split the "problem" into 4 chunks.
With that in mind, I made this:
public class FactorialPTest
{
    public static object _locker = new object();

    public static long Factorial(int x)
    {
        long result = 1;
        int right = 0;
        int nr = x;
        bool done = false;
        for (int i = 0; i < nr; i += (nr / 4))
        {
            int step = i;
            new Thread(new ThreadStart(() =>
            {
                right = (step + nr / 4) > nr ? nr : (step + nr / 4);
                long chunkResult = ChunkFactorial(step + 1, right);
                lock (_locker)
                {
                    result *= chunkResult;
                    if (right == nr)
                        done = true;
                }
            })).Start();
        }
        while (!done)
        {
            Thread.Sleep(10);
        }
        return result;
    }

    public static long ChunkFactorial(int left, int right)
    {
        //Console.WriteLine("left: {0} ; right: {1}", left, right);
        Console.WriteLine("ChunkFactorial Thread ID :" + Thread.CurrentThread.ManagedThreadId);
        if (left == right)
            return left == 0 ? 1 : left;
        else
            return right * ChunkFactorial(left, right - 1);
    }

    public static void main()
    {
        Console.WriteLine(Factorial(15));
    }
}
Sometimes it works, sometimes it gives me intermediate results, and sometimes a deadlock happens.
Why is this happening? Shouldn't Thread.Sleep(10) pause the main thread until I get the final result?
I'd suggest looking into the Task Parallel Library. Among other things, it will abstract away a lot of the low-level concerns relating to multi-threading.
You can represent each chunk of work with a task, add the tasks to a collection and then wait for them all to finish:
public static long Factorial(int x)
{
    long result = 1;
    int right = 0;
    int nr = x;
    bool done = false;
    var tasks = new List<Task>();
    for (int i = 0; i < nr; i += (nr / 4))
    {
        int step = i;
        tasks.Add(Task.Run(() =>
        {
            right = (step + nr / 4) > nr ? nr : (step + nr / 4);
            long chunkResult = ChunkFactorial(step + 1, right);
            lock (_locker)
            {
                result *= chunkResult;
            }
        }));
    }
    Task.WaitAll(tasks.ToArray());
    return result;
}
In your original code the last chunk could conceivably complete its work first, and right would equal nr even though the other chunks hadn't been calculated. Also, right was shared between all the threads, which could also lead to unpredictable results, i.e. all threads were trying to use this variable to hold different values at the same time.
Generally you should try to avoid sharing state between threads if possible. The code above could be improved by having each task return its result and then using these results to calculate the final one:
public static long Factorial(int x)
{
    int nr = x;
    var tasks = new List<Task<long>>();
    for (int i = 0; i < nr; i += (nr / 4))
    {
        int step = i;
        tasks.Add(Task.Run(() =>
        {
            int right = (step + nr / 4) > nr ? nr : (step + nr / 4);
            return ChunkFactorial(step + 1, right);
        }));
    }
    Task.WaitAll(tasks.ToArray());
    return tasks.Select(t => t.Result).Aggregate((i, next) => i * next);
}
You could use one of the Parallel.For overloads with aggregation; it will handle the parallelism, the partitioning of the workload and the aggregation of the results for you. With a long result you can only go up to the factorial of 20 before it overflows, so it also makes sense to add a checked { ... } block to catch overflows. The code could look like:
public long CalculateFactorial(long value)
{
    var result = 1L;
    var syncRoot = new object();
    checked
    {
        Parallel.For(
            // always 1
            1L,
            // target value (+1 because Parallel.For's upper bound is exclusive)
            value + 1,
            // if need more control, add { MaxDegreeOfParallelism = 4 }
            new ParallelOptions(),
            // thread local result init
            () => 1L,
            // next value
            (i, state, localState) => localState * i,
            // aggregate local thread results
            localState =>
            {
                lock (syncRoot)
                {
                    result *= localState;
                }
            });
    }
    return result;
}
Hope this helps.

PLINQ (C#/.Net 4.5.1) vs Stream (JDK/Java 8) Performance

I am trying to compare performance between parallel streams in Java 8 and PLINQ (C#/.Net 4.5.1).
Here is the result I get on my machine (System Manufacturer: Dell Inc., System Model: Precision M4700, Processor: Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz, 2701 MHz, 4 Core(s), 8 Logical Processor(s), Installed Physical Memory (RAM): 16.0 GB, OS Name: Microsoft Windows 7 Enterprise, Version: 6.1.7601 Service Pack 1 Build 7601)
C# .Net 4.5.1 (X64-release)
Serial:
470.7784, 491.4226, 502.4643, 481.7507, 464.1156, 463.0088, 546.149, 481.2942, 502.414, 483.1166
Average: 490.6373
Parallel:
158.6935, 133.4113, 217.4304, 182.3404, 184.188, 128.5767, 160.352, 277.2829, 127.6818, 213.6832
Average: 180.5496
Java 8 (X64)
Serial:
471.911822, 333.843924, 324.914299, 325.215631, 325.208402, 324.872828, 324.888046, 325.53066, 325.765791, 325.935861
Average:326.241715
Parallel:
212.09323, 73.969783, 68.015431, 66.246628, 66.15912, 66.185373, 80.120837, 75.813539, 70.085948, 66.360769
Average:70.3286
It looks like PLINQ does not scale across the CPU cores. I am wondering if I am missing something.
Here is the code for C#:
class Program
{
    static void Main(string[] args)
    {
        var NUMBER_OF_RUNS = 10;
        var size = 10000000;
        var vals = new double[size];
        var rnd = new Random();
        for (int i = 0; i < size; i++)
        {
            vals[i] = rnd.NextDouble();
        }

        var avg = 0.0;
        Console.WriteLine("Serial:");
        for (int i = 0; i < NUMBER_OF_RUNS; i++)
        {
            var watch = Stopwatch.StartNew();
            var res = vals.Select(v => Math.Sin(v)).ToArray();
            var elapsed = watch.Elapsed.TotalMilliseconds;
            Console.Write(elapsed + ", ");
            if (i > 0)
                avg += elapsed;
        }
        Console.Write("\nAverage: " + (avg / (NUMBER_OF_RUNS - 1)));

        avg = 0.0;
        Console.WriteLine("\n\nParallel:");
        for (int i = 0; i < NUMBER_OF_RUNS; i++)
        {
            var watch = Stopwatch.StartNew();
            var res = vals.AsParallel().Select(v => Math.Sin(v)).ToArray();
            var elapsed = watch.Elapsed.TotalMilliseconds;
            Console.Write(elapsed + ", ");
            if (i > 0)
                avg += elapsed;
        }
        Console.Write("\nAverage: " + (avg / (NUMBER_OF_RUNS - 1)));
    }
}
Here is the code for Java:
import java.util.Arrays;
import java.util.Random;
import java.util.stream.DoubleStream;

public class Main {
    private static final Random rand = new Random();
    private static final int MIN = 1;
    private static final int MAX = 140;
    private static final int POPULATION_SIZE = 10_000_000;
    public static final int NUMBER_OF_RUNS = 10;

    public static void main(String[] args) throws InterruptedException {
        Random rnd = new Random();
        double[] vals1 = DoubleStream.generate(rnd::nextDouble).limit(POPULATION_SIZE).toArray();

        double avg = 0.0;
        System.out.println("Serial:");
        for (int i = 0; i < NUMBER_OF_RUNS; i++)
        {
            long start = System.nanoTime();
            double[] res = Arrays.stream(vals1).map(Math::sin).toArray();
            double duration = (System.nanoTime() - start) / 1_000_000.0;
            System.out.print(duration + ", ");
            if (i > 0)
                avg += duration;
        }
        System.out.println("\nAverage:" + (avg / (NUMBER_OF_RUNS - 1)));

        avg = 0.0;
        System.out.println("\n\nParallel:");
        for (int i = 0; i < NUMBER_OF_RUNS; i++)
        {
            long start = System.nanoTime();
            double[] res = Arrays.stream(vals1).parallel().map(Math::sin).toArray();
            double duration = (System.nanoTime() - start) / 1_000_000.0;
            System.out.print(duration + ", ");
            if (i > 0)
                avg += duration;
        }
        System.out.println("\nAverage:" + (avg / (NUMBER_OF_RUNS - 1)));
    }
}
Both runtimes make a decision about how many threads to use to complete the parallel operation. That is a non-trivial task that can take many factors into account, including the degree to which the task is CPU-bound, the estimated time to complete the task, etc.
Each runtime makes a different decision about how many threads to use to resolve the request. Neither decision is obviously right or wrong in terms of system-wide scheduling, but the Java strategy performs better on this benchmark (and leaves fewer CPU resources available for other tasks on the system).
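If you want to see how the thread-count decision affects the C# numbers, PLINQ lets you pin the degree of parallelism explicitly; for example, a small tweak to the parallel measurement:
var res = vals.AsParallel()
              .WithDegreeOfParallelism(Environment.ProcessorCount) // use all logical cores
              .Select(v => Math.Sin(v))
              .ToArray();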

Precise alternative to Thread.Sleep

I have a method Limit() which measures the bandwidth passed through some channel in a certain time and limits it by using Thread.Sleep() (if the bandwidth limit is reached).
The method itself produces proper results (in my opinion), but Thread.Sleep doesn't (due to multithreaded CPU usage): I get a proper "millisecondsToWait", but the speed check afterwards is far from the limit I've passed in.
Is there a way to make the limiting more precise?
Limiter Class
public class Limiter
{
    private readonly int m_maxSpeedInKbps;

    public Limiter(int maxSpeedInKbps)
    {
        m_maxSpeedInKbps = maxSpeedInKbps;
    }

    public int Limit(DateTime startOfCycleDateTime, long writtenInBytes)
    {
        if (m_maxSpeedInKbps > 0)
        {
            double totalMilliseconds = DateTime.Now.Subtract(startOfCycleDateTime).TotalMilliseconds;
            int currentSpeedInKbps = (int)((writtenInBytes / totalMilliseconds));
            if (currentSpeedInKbps - m_maxSpeedInKbps > 0)
            {
                double delta = (double)currentSpeedInKbps / m_maxSpeedInKbps;
                int millisecondsToWait = (int)((totalMilliseconds * delta) - totalMilliseconds);
                if (millisecondsToWait > 0)
                {
                    Thread.Sleep(millisecondsToWait);
                    return millisecondsToWait;
                }
            }
        }
        return 0;
    }
}
Test class which always fails with a large delta:
[TestMethod]
public void ATest()
{
    List<File> files = new List<File>();
    for (int i = 0; i < 1; i++)
    {
        files.Add(new File(i + 1, 100));
    }
    const int maxSpeedInKbps = 1024; // 1MBps
    Limiter limiter = new Limiter(maxSpeedInKbps);
    DateTime startDateTime = DateTime.Now;
    Parallel.ForEach(files, new ParallelOptions { MaxDegreeOfParallelism = 5 }, file =>
    {
        DateTime currentFileStartTime = DateTime.Now;
        Thread.Sleep(5);
        limiter.Limit(currentFileStartTime, file.Blocks * Block.Size);
    });
    long roundOfWriteInKB = (files.Sum(i => i.Blocks.Count) * Block.Size) / 1024;
    int currentSpeedInKbps = (int)(roundOfWriteInKB / DateTime.Now.Subtract(startDateTime).TotalMilliseconds * 1000);
    Assert.AreEqual(maxSpeedInKbps, currentSpeedInKbps, string.Format("maxSpeedInKbps {0} currentSpeedInKbps {1}", maxSpeedInKbps, currentSpeedInKbps));
}
I used to use Thread.Sleep a lot until I discovered wait handles. Using wait handles you can suspend a thread, which comes alive again either when the wait handle is signalled from elsewhere or when a time threshold is reached. Perhaps it is possible to re-engineer your limiting methodology to use wait handles in some way, because in a lot of situations they are indeed much more precise than Thread.Sleep.
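A minimal sketch of that timed-wait pattern (the field name m_wakeUp is just illustrative): the wait ends either when the handle is signalled from another thread or when the timeout elapses, whichever comes first.
private readonly ManualResetEventSlim m_wakeUp = new ManualResetEventSlim(false);

private void WaitFor(int millisecondsToWait)
{
    // Returns early if another thread calls m_wakeUp.Set();
    // otherwise returns when the timeout elapses.
    m_wakeUp.Wait(millisecondsToWait);
}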
You can do it fairly accurately using a busy wait, but I wouldn't recommend it. You should use one of the multimedia timers to wait instead.
However, this method will wait fairly accurately:
void accurateWait(int millisecs)
{
    var sw = Stopwatch.StartNew();
    if (millisecs >= 100)
        Thread.Sleep(millisecs - 50);
    while (sw.ElapsedMilliseconds < millisecs)
        ;
}
But it is a busy wait and will consume CPU cycles terribly. Also it could be affected by garbage collections or task rescheduling.
Here's the test program:
using System;
using System.Diagnostics;
using System.Collections.Generic;
using System.Threading;

namespace Demo
{
    class Program
    {
        void run()
        {
            for (int i = 1; i < 10; ++i)
                test(i);
            for (int i = 10; i < 100; i += 5)
                test(i);
            for (int i = 100; i < 200; i += 10)
                test(i);
            for (int i = 200; i < 500; i += 20)
                test(i);
        }

        void test(int millisecs)
        {
            var sw = Stopwatch.StartNew();
            accurateWait(millisecs);
            Console.WriteLine("Requested wait = " + millisecs + ", actual wait = " + sw.ElapsedMilliseconds);
        }

        void accurateWait(int millisecs)
        {
            var sw = Stopwatch.StartNew();
            if (millisecs >= 100)
                Thread.Sleep(millisecs - 50);
            while (sw.ElapsedMilliseconds < millisecs)
                ;
        }

        static void Main()
        {
            new Program().run();
        }
    }
}
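As for the multimedia timers mentioned above, one common use is simply to raise the system timer resolution around the sleep, which makes Thread.Sleep itself land much closer to the requested time (a granularity of roughly 1 ms instead of the default ~15.6 ms). A minimal sketch using the winmm.dll timeBeginPeriod/timeEndPeriod APIs (the wrapper name HighResSleep is just illustrative):
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class HighResSleep
{
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uPeriod);

    public static void Sleep(int millisecs)
    {
        timeBeginPeriod(1);              // request 1 ms scheduler granularity
        try
        {
            Thread.Sleep(millisecs);
        }
        finally
        {
            timeEndPeriod(1);            // always restore the previous resolution
        }
    }
}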
