How do I calculate how many hashes I generate per second? - c#

I have a function which generates hashes from a string:
string GenerateHash(string plainText);
I generate as many hashes as possible with 4 threads.
How do I calculate how many hashes (or megahashes) I generate per second?

Your problem breaks down nicely into 3 separate tasks
Sharing a single count variable across threads
Benchmarking thread completion time
Calculating hashes p/sec
Sharing a single count variable across threads
public static class GlobalCounter
{
    // Interlocked needs a field rather than a property, so back the counter with a field.
    private static int _value;

    public static int Value
    {
        get { return Volatile.Read(ref _value); }
    }

    public static void Increment()
    {
        Interlocked.Increment(ref _value);
    }

    public static void Reset()
    {
        Interlocked.Exchange(ref _value, 0);
    }
}
Before you spin off the threads, call GlobalCounter.Reset; then in each thread (after each successful hash) call GlobalCounter.Increment. Using Interlocked performs the update on the counter atomically in a thread-safe manner, and it's also much faster than lock.
Benchmarking thread completion time
var sw = Stopwatch.StartNew();
Parallel.ForEach(someCollection, someValue =>
{
    // generate hash
    GlobalCounter.Increment();
});
sw.Stop();
Parallel.ForEach will block until all threads have finished
Calculating hashes per second
...
sw.Stop();
var hashesPerSecond = GlobalCounter.Value / sw.Elapsed.TotalSeconds; // TotalSeconds, not Seconds (which is only the whole-seconds component)

Use a counter variable to count the number of hashes generated, and divide it by the elapsed time every x seconds using a timer.
It's probably most performant to keep a separate counter per thread and let the timer read all of the counters; that way you avoid most of the locking.
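A minimal sketch of that idea, assuming 4 worker threads and a System.Threading.Timer that polls once per second; the actual GenerateHash call is only indicated by a comment:
using System;
using System.Threading;

class PerThreadHashRate
{
    const int ThreadCount = 4;
    static readonly long[] Counters = new long[ThreadCount];

    static void Main()
    {
        // Each worker owns one slot in the array, so its increment needs no lock.
        for (int t = 0; t < ThreadCount; t++)
        {
            int slot = t;
            var worker = new Thread(() =>
            {
                while (true)
                {
                    // GenerateHash("some input");   // the real hashing work goes here
                    Counters[slot]++;
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        long previousTotal = 0;
        // The timer only reads the counters, so the workers never block each other.
        var timer = new Timer(_ =>
        {
            long total = 0;
            for (int t = 0; t < ThreadCount; t++)
                total += Volatile.Read(ref Counters[t]);
            Console.WriteLine("{0} hashes/sec", total - previousTotal);
            previousTotal = total;
        }, null, 1000, 1000);

        Console.ReadKey();
        GC.KeepAlive(timer);
    }
}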

You can use the Stopwatch class to measure elapsed time more accurately. Start the stopwatch, generate your hashes, then stop the stopwatch. You then have a count of the hashes and the total time it took to generate them, which is all you need to calculate the per-second rate.

calling Interlocked after Semaphore WaitOne

Came across the following code, which blocks on a Semaphore when GenerateLabel is called more than 4 times concurrently. After the WaitOne, a member mCurrentScanner is used to get access to a scanner. The question is whether the Interlocked calls are needed after the WaitOne. I'd say no, as the thread starts fresh when the WaitHandle is released, but I'm not 100% sure.
mConcurrentLabels = new Semaphore(4, 4);

public string GenerateLabel()
{
    mConcurrentLabels.WaitOne();

    int current = 0;
    Interlocked.Exchange(ref current, mCurrentScanner);
    (scanner, dir) = ScanMappings[current];
    Interlocked.Increment(ref mCurrentScanner);
    mCurrentScanner %= 4;

    DoLongRunningTask();
    mConcurrentLabels.Release();
}
Like you said, the semaphore only limits the number of concurrent threads; the body is still executed concurrently, so locks/Interlocked are required.
The bigger problem is the combination of Interlocked.Exchange(ref current, mCurrentScanner); to read the value and Interlocked.Increment(ref mCurrentScanner); to advance it.
Two threads can read the same value in the Exchange() and then both increment, so you'd select one scanner twice and skip the next one.
I also advise using try/finally when working with semaphores.
mConcurrentLabels = new Semaphore(4, 4);

public string GenerateLabel()
{
    mConcurrentLabels.WaitOne();
    try
    {
        int current = Interlocked.Increment(ref mCurrentScanner);
        (scanner, dir) = ScanMappings[current];
        // mCurrentScanner %= 4; <------ ?

        DoLongRunningTask();
    }
    finally
    {
        mConcurrentLabels.Release();
    }
}
But if you need to mod the mCurrentScanner, I wouldn't use Interlocked.
mConcurrentLabels = new Semaphore(4, 4);
object mSyncRoot = new object();

public string GenerateLabel()
{
    mConcurrentLabels.WaitOne();
    try
    {
        int current;
        lock (mSyncRoot)
        {
            current = mCurrentScanner++;
            mCurrentScanner %= 4;
        }
        (scanner, dir) = ScanMappings[current];
        // mCurrentScanner %= 4; <------ ?

        DoLongRunningTask();
    }
    finally
    {
        mConcurrentLabels.Release();
    }
}
It seems that the purpose of the semaphore is to protect the long running task and not to protect access to the private variables.
This is useful from a resource-management perspective, for example to prevent too many concurrent long-running tasks from thrashing a shared resource like a database.
The interlocked statements are needed to protect the private variables because the semaphore allows this code to run up to four times concurrently on different threads.
It is good practice to put the main part of this code in a try {} finally{} block to guarantee mConcurrentLabels.Release() is called exactly one time for every time mConcurrentLabels.WaitOne() is called.

How to avoid running out of RAM during concurrent data processing?

I have an issue with concurrent data processing: my PC runs out of RAM quickly. Any advice on how to fix my concurrent implementation?
Common class:
public class CalculationResult
{
    public int Count { get; set; }
    public decimal[] RunningTotals { get; set; }

    public CalculationResult(decimal[] profits)
    {
        this.Count = 1;
        this.RunningTotals = new decimal[12];
        profits.CopyTo(this.RunningTotals, 0);
    }

    public void Update(decimal[] newData)
    {
        this.Count++;
        // sum the arrays element by element
        for (int i = 0; i < 12; i++)
            this.RunningTotals[i] = this.RunningTotals[i] + newData[i];
    }

    public void Update(CalculationResult otherResult)
    {
        this.Count += otherResult.Count;
        // sum the arrays element by element
        for (int i = 0; i < 12; i++)
            this.RunningTotals[i] = this.RunningTotals[i] + otherResult.RunningTotals[i];
    }
}
Single-core implementation of the code is following:
Dictionary<string, CalculationResult> combinations = new Dictionary<string, CalculationResult>();
foreach (var i in itterations)
{
// do the processing
// ..
string combination = "1,2,3,4,42345,52,523"; // this is determined during the processing
if (combinations.ContainsKey(combination))
combinations[combination].Update(newData);
else
combinations.Add(combination, new CalculationResult(newData));
}
Multi-core implementation:
ConcurrentBag<Dictionary<string, CalculationResult>> results = new ConcurrentBag<Dictionary<string, CalculationResult>>();
Parallel.ForEach(itterations, (i, state) =>
{
Dictionary<string, CalculationResult> combinations = new Dictionary<string, CalculationResult>();
// do the processing
// ..
// add combination to combinations -> same logic as in single core implementation
results.Add(combinations);
});
Dictionary<string, CalculationResult> combinationsReal = new Dictionary<string, CalculationResult>();
foreach (var item in results)
{
foreach (var pair in item)
{
if (combinationsReal.ContainsKey(pair.Key))
combinationsReal[pair.Key].Update(pair.Value);
else
combinationsReal.Add(pair.Key, pair.Value);
}
}
The issue I am having is that almost every combinations dictionary ends up with ~930k records in it, which on average consumes about 400 MB of RAM.
Now, in the single-core implementation there is only one such dictionary, and all checks are performed against it. But this is a slow approach, and I want to use multi-core optimizations.
In the multi-core implementation there is a ConcurrentBag instance which holds all the combinations dictionaries. As soon as the multi-threaded job is finished, all dictionaries are aggregated into one. This approach works well for a small number of concurrent iterations; for example, for 4 iterations my RAM usage was ~1.5 GB. The issue arises when I set the full number of parallel iterations, which is 200! No amount of PC RAM is enough to hold all the dictionaries, with nearly a million records each!
I was thinking about using ConcurrentDictionary, until I found out that the TryAdd method does not guarantee the integrity of the added data in my situation, as I also need to run updates on the running totals.
The only real multi-threaded option, instead of adding all combinations to a dictionary, is to save them to a DB. Data aggregation would then be a matter of one SQL select statement with a group by clause... but I don't like the idea of creating a temporary table and running a DB instance just for that.
Is there a workaround to process the data concurrently without running out of RAM?
EDIT:
Maybe the real question should have been: how do I make the updating of RunningTotals thread-safe when using ConcurrentDictionary? I have just run across this thread, with a similar issue with ConcurrentDictionary, but my situation seems to be more complicated as I have an array that needs to be updated. I am still investigating this matter.
EDIT2: Here is a working solution with ConcurrentDictionary. All I needed to do is to add a lock for the dictionary key.
ConcurrentDictionary<string, CalculationResult> combinations = new ConcurrentDictionary<string, CalculationResult>();

Parallel.ForEach(itterations, (i, state) =>
{
    // do the processing
    // ..
    string combination = "1,2,3,4,42345,52,523"; // this is determined during the processing
    if (combinations.ContainsKey(combination))
    {
        lock (combinations[combination])
            combinations[combination].Update(newData);
    }
    else
        combinations.TryAdd(combination, new CalculationResult(newData));
});
Single-thread code execution time is 1m 48s, whereas this solution execution time is 1m 7s for 4 iterations (37% performance increase). I am still wondering if SQL approach will be any faster, with millions of records? I will test it out possibly tomorrow and update.
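One caveat with the snippet above: if two threads both miss ContainsKey for the same new key, one of the TryAdd calls fails and its newData is silently lost. A hedged sketch of a GetOrAdd-based variant that closes that window (same combinations, combination and newData names assumed; the lock is still needed because CalculationResult.Update mutates shared state):
var fresh = new CalculationResult(newData);            // candidate entry holding this iteration's data
var entry = combinations.GetOrAdd(combination, fresh);
if (!ReferenceEquals(entry, fresh))
{
    // Another thread added the entry first, so merge this iteration's data into it.
    lock (entry)
        entry.Update(newData);
}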
Edit 3: For those of you wondering what's wrong with ConcurrentDictionary updates on a value - run this code with and without the lock.
public class Result
{
    public int Count { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Start");

        List<int> keys = new List<int>();
        for (int i = 0; i < 100; i++)
            keys.Add(i);

        ConcurrentDictionary<int, Result> dict = new ConcurrentDictionary<int, Result>();

        Parallel.For(0, 8, i =>
        {
            foreach (var key in keys)
            {
                if (dict.ContainsKey(key))
                {
                    //lock (dict[key]) // uncomment this
                    dict[key].Count++;
                }
                else
                    dict.TryAdd(key, new Result());
            }
        });

        // any output here is incorrect behavior. best result = no lines
        foreach (var item in dict)
            if (item.Value.Count != 7) { Console.WriteLine($"{item.Key}; {item.Value.Count}"); }

        Console.WriteLine($"Finish");
        Console.ReadKey();
    }
}
Edit 4: After trial and error, I couldn't optimize the SQL approach; it turned out to be the worst idea :) I used an SQLite database, both in-memory and in-file, with a transaction and reusable SQL command parameters. Due to the huge number of records that need to be inserted, the performance is lacking. Data aggregation is the easiest part, but just inserting 4 million rows takes a huge amount of time; I can't even begin to imagine how 240 million rows could be processed efficiently. So far (and also strangely), the ConcurrentBag approach seems to be the fastest on my PC, followed by the ConcurrentDictionary approach. ConcurrentBag is a bit heavier on memory, though. Thanks to the work of @Alisson, it is now perfectly fine to use it for a larger set of iterations!
So, you just need to make sure you have no more than 4 concurrent iterations; that's the limit of your computer's resources, and by using only this computer there is no magic.
I created a class to control the concurrent execution and the number of concurrent tasks it will perform.
The class will hold these properties:
public class ConcurrentCalculationProcessor
{
private const int MAX_CONCURRENT_TASKS = 4;
private readonly IEnumerable<int> _codes;
private readonly List<Task<Dictionary<string, CalculationResult>>> _tasks;
private readonly Dictionary<string, CalculationResult> _combinationsReal;
public ConcurrentCalculationProcessor(IEnumerable<int> codes)
{
this._codes = codes;
this._tasks = new List<Task<Dictionary<string, CalculationResult>>>();
this._combinationsReal = new Dictionary<string, CalculationResult>();
}
}
I made the number of concurrent tasks a const, but it could be a parameter in the constructor.
I created a method to handle the processing. For test purposes, I simulated a loop through 900k items, adding them to a dictionary, and finally returning them:
private async Task<Dictionary<string, CalculationResult>> ProcessCombinations()
{
Dictionary<string, CalculationResult> combinations = new Dictionary<string, CalculationResult>();
// do the processing
// here we should do something that worth using concurrency
// like querying databases, consuming APIs/WebServices, and other I/O stuff
for (int i = 0; i < 950000; i++)
combinations[i.ToString()] = new CalculationResult(new decimal[] { 1, 10, 15 });
return await Task.FromResult(combinations);
}
The main method will start tasks in parallel, adding them to a list of tasks so we can keep track of them later.
Every time the list reaches the maximum number of concurrent tasks, we await a method called ProcessCompletedTasks.
public async Task<Dictionary<string, CalculationResult>> Execute()
{
ConcurrentBag<Dictionary<string, CalculationResult>> results = new ConcurrentBag<Dictionary<string, CalculationResult>>();
for (int i = 0; i < this._codes.Count(); i++)
{
// start the task imediately
var task = ProcessCombinations();
this._tasks.Add(task);
if (this._tasks.Count() >= MAX_CONCURRENT_TASKS)
{
// if we have more than MAX_CONCURRENT_TASKS in progress, we start processing some of them
// this will await any of the current tasks to complete, them process it (and any other task which may have been completed as well)...
await ProcessCompletedTasks().ConfigureAwait(false);
}
}
// keep processing until all the pending tasks have been completed...it should be no more than MAX_CONCURRENT_TASKS
while(this._tasks.Any())
await ProcessCompletedTasks().ConfigureAwait(false);
return this._combinationsReal;
}
The next method, ProcessCompletedTasks, will wait for at least one of the existing tasks to complete. After that, it takes all the completed tasks from the list (the one that just finished plus any others that completed in the meantime) and gets their results (the combinations).
Each processedCombinations is then merged into this._combinationsReal (using the same logic you provided in your question).
private async Task ProcessCompletedTasks()
{
await Task.WhenAny(this._tasks).ConfigureAwait(false);
var completedTasks = this._tasks.Where(t => t.IsCompleted).ToArray();
// completedTasks will have at least one task, but it may have more ;)
foreach (var completedTask in completedTasks)
{
var processedCombinations = await completedTask.ConfigureAwait(false);
foreach (var pair in processedCombinations)
{
if (this._combinationsReal.ContainsKey(pair.Key))
this._combinationsReal[pair.Key].Update(pair.Value);
else
this._combinationsReal.Add(pair.Key, pair.Value);
}
this._tasks.Remove(completedTask);
}
}
For each processedCombinations merged in _combinationsReal, it will remove its respective task from the list, and move on (start adding more tasks again). This will happen until we have created all the tasks for all iterations.
Finally, we keep processing it, until there are no more tasks in the list.
If you monitor the RAM consumption, you'll notice it will increase to about 1.5 GB (when we have 4 tasks being processed concurrently), then decrease to about 0.8 GB (when we remove tasks from the list). At least this is what happened in my computer.
Here is a fiddle; however, I had to decrease the number of items from 900k to 100, because the fiddle limits memory usage to prevent abuse.
I hope this helps you somehow.
One thing to notice about all of this is that you will benefit from concurrent tasks mostly if your ProcessCombinations (the method that is executed concurrently when processing those 900k items) calls external resources, like reading files from your HD, executing a query against a database, or calling an API/WebService method. I guess your code is probably reading the 900k items from an external resource, in which case this will reduce the time needed to process them.
If the items were previously loaded and ProcessCombinations is just reading data that was already in memory, then the concurrency won't help at all (actually I believe it would make your code run slower). If that's the case, then we are applying concurrency in the wrong place.
Using async calls in parallel is likely to help more when said calls are going to access external resources (either to get or store data), and depending on how many concurrent calls that external resources can support, it may still not make such a difference.

C# Timing Loop Rate Inaccuracies

I'm trying to create a method in C# whereby I can repeatedly perform an action (in my particular application it's sending a UDP packet) at a targeted rate. I know that timer inaccuracy in C# will prevent the output from being precisely at the target rate, but best effort is good enough. However, at higher rates, the timing seems to be completely off.
while (!token.IsCancellationRequested)
{
    stopwatch.Restart();
    Thread.Sleep(5); // Simulate process
    int processTime = (int)stopwatch.ElapsedMilliseconds;
    int waitTime = taskInterval - processTime;
    Task.Delay(Math.Max(0, waitTime)).Wait();
}
See here for the full example console app.
When run, the FPS output of this test app shows around 44-46 Hz for a target of 60 Hz. However at lower rates (say 20 Hz), the output rate is much closer to the target. I can't understand why this would be the case. What is the problem with my code?
The problem is that Thread.Sleep (or Task.Delay) is not very accurate. Take a look at this: Accuracy of Task.Delay
One way to fix this is to start the timer once and then have a loop where you delay ~15 ms in each iteration. Inside each iteration, you calculate how many times the operation should have been executed so far and compare it with how many times you have actually run it. You then run the operation enough times to catch up.
Here is some code sample:
private static void timerTask(CancellationToken token)
{
    const int taskRateHz = 60;
    var stopwatch = new Stopwatch();
    stopwatch.Start();

    int ran_so_far = 0;

    while (!token.IsCancellationRequested)
    {
        Thread.Sleep(15);

        // ElapsedMilliseconds is a long, so cast the result back to int
        int should_have_run =
            (int)(stopwatch.ElapsedMilliseconds * taskRateHz / 1000);
        int need_to_run_now = should_have_run - ran_so_far;

        if (need_to_run_now > 0)
        {
            for (int i = 0; i < need_to_run_now; i++)
            {
                ExecuteTheOperationHere();
            }
            ran_so_far += need_to_run_now;
        }
    }
}
Please note that you want to use longs instead of ints if the process is to remain alive for a very long time.
If you replace this:
Task.Delay(Math.Max(0, waitTime)).Wait();
with this:
Thread.Sleep(Math.Max(0, waitTime));
You should get closer values on higher rates (why would you use Task.Delay(..).Wait() anyway?).
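For reference, a sketch of the loop from the question with that one-line substitution applied:
while (!token.IsCancellationRequested)
{
    stopwatch.Restart();
    Thread.Sleep(5); // Simulate process
    int processTime = (int)stopwatch.ElapsedMilliseconds;
    int waitTime = taskInterval - processTime;
    Thread.Sleep(Math.Max(0, waitTime));   // synchronous sleep instead of Task.Delay(..).Wait()
}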

Measure Execution time of Parallel.For

I am using a Parallel.For loop to increase execution speed of a computation.
I would like to measure the approximate time left for the computation. Normally one simply has to measure the time it takes for each step and estimate the total time by multiplying the step time by the total number of steps.
e.g., if there are 100 steps and a step takes 5 seconds, then one could expect the total time to be about 500 seconds. (One could average over several steps and continuously report it to the user, which is what I want to do.)
The only way I can think to do this is by using an outer for loop that essentially resorts back to the original way by splitting up the parallel.for interval and measuring each one.
for(i;n;i += step)
Time(Parallel.For(i, i + step - 1, ...))
This isn't a very good way in general, because either a small number of very long steps or a large number of very short steps causes problems with the timing.
Anyone have any ideas?
(Please realize I need a real time estimation of the time it is taking the parallel.for to complete and NOT the total time. I want to let the user know how much time is left in execution).
This method seems to be pretty effective. We can "linearize" the parallel for loop by simply having each parallel loop increment a counter:
Parallel.For(0, n, (i) => { Thread.Sleep(1000); Interlocked.Increment(ref cnt); });
(Note, thanks to Niclas, that ++ is not atomic and one must use lock or Interlocked.Increment)
Each iteration, running in parallel, will increment cnt. The effect is that cnt increases monotonically towards n, and cnt/n is the fraction of the For that has completed. Since cnt is only updated through Interlocked.Increment, there is no lock contention and no concurrency issue; it is very fast and perfectly accurate.
We can measure the percentage of completion of the parallel For loop at any time during the execution by simply computing cnt/n
The total computation time can be estimated by dividing the elapsed time since the start of the loop by the fraction complete. These two quantities change at approximately the same rate when each iteration takes roughly the same amount of time and the workload is reasonably well behaved (small fluctuations average out).
Obviously the more unpredictable each task is, the more inaccurate the remaining computation time will be. This is to be expected and in general, there is no solution (which is why it's called an approximation). We can still get the elapsed computation time or percentage with complete accuracy.
The underlying assumption of any "time left" estimate is that each sub-task takes approximately the same computation time (assuming one wants a linear result). For example, if we have a parallel workload where 99 tasks are very quick and 1 task is very slow, our estimate will be grossly inaccurate: the counter will zip up to 99 pretty quickly and then sit on the last percentage until the slow task completes. We could linearly interpolate and do further estimation to get a smoother countdown, but ultimately there is a breaking point.
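In code, the estimate at any instant is just the following sketch (cnt and n as above; sw is an assumed Stopwatch started with the loop):
double fractionDone   = cnt / (double)n;                          // progress in [0, 1]; guard against 0 before dividing
double estimatedTotal = sw.Elapsed.TotalSeconds / fractionDone;   // projected total runtime
double secondsLeft    = estimatedTotal - sw.Elapsed.TotalSeconds;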
The following code demonstrates how to measure the parallel for efficiently. Note the time at 100% is the true total execution time and can be used as a reference.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
namespace ParallelForTiming
{
class Program
{
static void Main(string[] args)
{
var sw = new Stopwatch();
var pct = 0.000001;
var iter = 20;
var time = 20 * 1000 / iter;
var p = new ParallelOptions(); p.MaxDegreeOfParallelism = 4;
var Done = false;
Parallel.Invoke(() =>
{
sw.Start();
Parallel.For(0, iter, p, (i) => { Thread.Sleep(time); lock(p) { pct += 1 / (double)iter; }});
sw.Stop();
Done = true;
}, () =>
{
while (!Done)
{
Console.WriteLine(Math.Round(pct*100,2) + " : " + ((pct < 0.1) ? "oo" : (sw.ElapsedMilliseconds / pct /1000.0).ToString()));
Thread.Sleep(2000);
}
}
);
Console.WriteLine(Math.Round(pct * 100, 2) + " : " + sw.ElapsedMilliseconds / pct / 1000.0);
Console.ReadKey();
}
}
}
This is almost impossible to answer.
First of all, it's not clear what all the steps do. Some steps may be I/O-intensive, or computationally intensive.
Furthermore, Parallel.For is a request -- you are not sure that your code will actually run in parallel. It depends on circumstances (availability of threads and memory) whether the code will actually run in parallel. Then if you have parallel code that relies on I/O, one thread will block the others while waiting for the I/O to complete. And you don't know what other processes are doing either.
This is what makes predicting how long something will take extremely error-prone and, actually, an exercise in futility.
This problem is a tough one to answer. The timing problems you mention with a few very long steps or a large number of very short steps are likely related to your loop working at the edges of what the parallel partitioner can handle.
Since the default partitioner is very dynamic and we know nothing about your actual problem, there is no good answer that lets you solve the problem at hand while still reaping the benefits of parallel execution with dynamic load balancing.
If it is very important to achieve a reliable estimate of the projected runtime, perhaps you could set up a custom partitioner and then leverage your knowledge of the partitioning to extrapolate timings from a few chunks, as sketched below.
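A minimal sketch of that idea, not from the original answer: fixed-size ranges via Partitioner.Create make every chunk comparable, so the time of the chunks finished so far can be projected onto the rest. DoCalculation, n and chunkSize are assumed names.
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class PartitionerTimingSketch
{
    static void Main()
    {
        const int n = 100000;        // total number of steps (assumed)
        const int chunkSize = 5000;  // fixed-size chunks so every chunk is comparable

        var ranges = Partitioner.Create(0, n, chunkSize);  // static ranges instead of the default dynamic partitioner
        int chunksDone = 0;
        var sw = Stopwatch.StartNew();

        Parallel.ForEach(ranges, range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                // DoCalculation(i);   // the real per-step work would go here
            }

            int done = Interlocked.Increment(ref chunksDone);
            double fraction = Math.Min(1.0, done * (double)chunkSize / n);
            double secondsLeft = sw.Elapsed.TotalSeconds / fraction - sw.Elapsed.TotalSeconds;
            Console.WriteLine($"{fraction:P0} done, ~{secondsLeft:F1}s remaining");
        });
    }
}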
Here's a possible solution that measures the average over all previously finished tasks. After each task finishes, an Action<T> is called in which you can sum up all the times and divide by the number of tasks finished so far. This is, however, only the current state and has no way to predict future tasks/averages. (As others mentioned, this is quite difficult.)
However: you'll have to measure whether it fits your problem, because there is a possibility of lock contention on the method-level variables.
static void ComputeParallelForWithTLS()
{
    var collection = new List<int>() { 1000, 2000, 3000, 4000 }; // values used as sleep parameter
    var sync = new object();
    TimeSpan averageTime = new TimeSpan();
    int amountOfItemsDone = 0; // referenced by the TPL, increment it with lock / Interlocked.Increment

    Parallel.For(0, collection.Count,
        () => new TimeSpan(),
        (i, loopState, tlData) =>
        {
            var sw = Stopwatch.StartNew();
            DoWork(collection, i);
            sw.Stop();
            return sw.Elapsed;
        },
        threadLocalData => // Called each time a task finishes
        {
            lock (sync)
            {
                averageTime += threadLocalData; // add time used for this task to the total.
            }
            Interlocked.Increment(ref amountOfItemsDone); // increment the tasks done

            Console.WriteLine(averageTime.TotalMilliseconds / amountOfItemsDone + " ms.");
            /* print out the average for all done tasks so far. For an estimate,
               multiply by the number of remaining items. */
        });
}

static void DoWork(List<int> items, int current)
{
    System.Threading.Thread.Sleep(items[current]);
}
I would propose having the method that is executed at each step report when it is done. This is slightly tricky with thread safety, of course, so that is something to keep in mind when implementing. This lets you keep track of the number of finished tasks out of the total, and also makes it (sort of) easy to know the time spent on each individual step, which is useful for removing outliers etc.
EDIT: Some code to demonstrate the idea
Parallel.For(startIdx, endIdx, idx =>
{
    var sw = Stopwatch.StartNew();
    DoCalculation(idx);
    sw.Stop();
    var dur = sw.Elapsed;
    ReportFinished(idx, dur);
});
The key here is that ReportFinished will give you continuous information about the number of finished tasks and the duration of each of them. This enables you to make better guesses about how much time remains by doing statistics on this data.
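A hedged sketch, not from the answer, of what such a ReportFinished could look like, assuming class-level fields and a Stopwatch started just before the Parallel.For above; the projection simply assumes the finished steps are representative of the rest:
static readonly object ProgressLock = new object();
static int finishedSteps = 0;
static TimeSpan totalStepTime = TimeSpan.Zero;
static int totalSteps;       // set to endIdx - startIdx before the loop starts
static Stopwatch wallClock;  // started just before the Parallel.For

static void ReportFinished(int idx, TimeSpan duration)
{
    lock (ProgressLock)
    {
        finishedSteps++;
        totalStepTime += duration;

        double averageStep = totalStepTime.TotalSeconds / finishedSteps;

        // Steps run in parallel, so project from elapsed wall time rather than from the summed step times.
        double estimatedTotal = wallClock.Elapsed.TotalSeconds * totalSteps / (double)finishedSteps;
        double secondsLeft = estimatedTotal - wallClock.Elapsed.TotalSeconds;

        Console.WriteLine($"{finishedSteps}/{totalSteps} done, avg step {averageStep:F2}s, ~{secondsLeft:F1}s left");
    }
}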
Here is a class I wrote that measures time and speed:
public static class Counter
{
private static long _seriesProcessedItems = 0;
private static long _totalProcessedItems = 0;
private static TimeSpan _totalTime = TimeSpan.Zero;
private static DateTime _operationStartTime;
private static object _lock = new object();
private static int _numberOfCurrentOperations = 0;
public static void StartAsyncOperation()
{
lock (_lock)
{
if (_numberOfCurrentOperations == 0)
{
_operationStartTime = DateTime.Now;
}
_numberOfCurrentOperations++;
}
}
public static void EndAsyncOperation(int itemsProcessed)
{
lock (_lock)
{
_numberOfCurrentOperations--;
if (_numberOfCurrentOperations < 0)
throw new InvalidOperationException("EndAsyncOperation without StartAsyncOperation");
_seriesProcessedItems +=itemsProcessed;
if (_numberOfCurrentOperations == 0)
{
_totalProcessedItems += _seriesProcessedItems;
_totalTime += DateTime.Now - _operationStartTime;
_seriesProcessedItems = 0;
}
}
}
public static double GetAvgSpeed()
{
if (_totalProcessedItems == 0) throw new InvalidOperationException("_totalProcessedItems is zero");
if (_totalTime == TimeSpan.Zero) throw new InvalidOperationException("_totalTime is zero");
return _totalProcessedItems / (double)_totalTime.TotalMilliseconds;
}
public static void Reset()
{
_totalProcessedItems = 0;
_totalTime = TimeSpan.Zero;
}
}
Example of usage and test:
static void Main(string[] args)
{
var st = Stopwatch.StartNew();
Parallel.For(0, 100, _ =>
{
Counter.StartAsyncOperation();
Thread.Sleep(100);
Counter.EndAsyncOperation(1);
});
st.Stop();
Console.WriteLine("Speed correct {0}", 100 / (double)st.ElapsedMilliseconds);
Console.WriteLine("Speed to test {0}", Counter.GetAvgSpeed());
}

Measuring the execution time in multithreading environment

I have an application that uses Task (TPL) objects for asynchronous execution.
The main thread waits for a trigger (some TCP packet) and then executes several tasks. What I want to do is to measure the time spent in the tasks.
Take a look at the code. I have some lengthy operation (Generator), enclosed in Stopwatch's start/stop.
Task.Factory.StartNew(() =>
{
    Stopwatch sw = new Stopwatch();
    sw.Start();

    Generator g = new Generator();
    g.GenerateIntervals(); // lengthy operation

    sw.Stop();
    GlobalStopwatch.Add(sw.Elapsed);
});
Here is the problem: Stopwatch records a timestamp at the moment of Start() and again at the moment of Stop(), then subtracts the two to get the elapsed time.
The thing is, some other thread (on a single-core system) can get some processor time while the Generator (from the code) is doing its lengthy GenerateIntervals() operation. That means the elapsed time recorded by the stopwatch contains not only the Generator.GenerateIntervals() time, but also the time the other threads spent doing their work in between.
Is there any simple way to know exactly how much processor time some method took, excluding the execution time of other threads that results from the time-sharing mechanism?
The answer to your question is "No"... No, you cannot measure the accumulated time ON THE CPU for a particular thread.
(Side-rant: I really wish people would read the question and understand it before answering!!!)
Ok, back to your question... the most accurate thing you could do would be to spin off a separate process for each of your tasks, and then measure the CPU time for the process (which can be done in .Net)... but that's overkill.
If you need help on how to do that, you should ask another question specifically for that.
Here is a nice article. You can use it, or you can compare those times using the built-in performance analyzer in VS2010.
You could use the Windows API functions QueryPerformanceCounter() and QueryPerformanceFrequency() to measure elapsed time with high resolution, as in the timer class below.
using System;
using System.Runtime.InteropServices;
using System.ComponentModel;
using System.Threading;
namespace Win32
{
internal class HiPerfTimer
{
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceCounter(
out long lpPerformanceCount);
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceFrequency(
out long lpFrequency);
private long startTime, stopTime;
private long freq;
// Constructor
public HiPerfTimer()
{
startTime = 0;
stopTime = 0;
if (QueryPerformanceFrequency(out freq) == false)
{
// high-performance counter not supported
throw new Win32Exception();
}
}
// Start the timer
public void Start()
{
// let any waiting threads do their work first
Thread.Sleep(0);
QueryPerformanceCounter(out startTime);
}
// Stop the timer
public void Stop()
{
QueryPerformanceCounter(out stopTime);
}
// Returns the duration of the timer (in seconds)
public double Duration
{
get
{
return (double)(stopTime - startTime) / (double) freq;
}
}
}
}
In fact the answer is YES (but you need to use interop).
There is a WINAPI function which is called QueryThreadCycleTime and does exactly this:
"Retrieves the cycle time for the specified thread."
