Thread-Safe Method with a Stopwatch - C#

I'm trying to determine whether the code I'm using is thread-safe. I'm basically calling a method several times from different threads and capturing the time it takes for certain calls within the method to complete.
Here is an example of what I am doing.
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

namespace ThreadTest
{
    class Program
    {
        static BlockingCollection<TimeSpan> Timer1 = new BlockingCollection<TimeSpan>(new ConcurrentBag<TimeSpan>());

        static TimeSpan CaptureTime(Action action)
        {
            Stopwatch stopwatch = Stopwatch.StartNew();
            action();
            stopwatch.Stop();
            return stopwatch.Elapsed;
        }

        static void ThreadFunction()
        {
            TimeSpan timer1 = CaptureTime(() =>
            {
                //Do Some Work
            });
            Timer1.Add(timer1);
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 50; i++)
            {
                var task = new Task(ThreadFunction);
                task.Start();
            }
        }
    }
}
And what I'm trying to determine is whether or not the TimeSpan values returned by the CaptureTime method can be trusted.
Thank you to anyone who can enlighten me.

Use of Stopwatch here is not the problem. See this recent answer. Since you are in a single thread when you use the Stopwatch, it will work fine.
But I'm not sure this approach is really going to be very useful. Are you trying to create your own profiler? Why not just use existing profiling tools?
When you spin up 50 instances of the same operation, they're bound to fight for the same CPU resources. Also, a new Task might or might not spin up a new thread. Even then, the amount of switching involved would make the results less-than-meaningful. Unless you are specifically trying to observe parallel behavior, I would avoid this approach.
The better way would be to run the action 50 times sequentially, time the whole thing, then divide by 50. (Assuming this is a short-running task.)
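For illustration, here is a minimal sketch of that sequential approach; the iteration count and the placeholder action are assumptions standing in for the real work:

using System;
using System.Diagnostics;

class AverageTiming
{
    static void Main()
    {
        const int iterations = 50;  // assumed batch size
        Action action = () => { /* the work being measured */ };

        // Time the whole batch on one thread, then divide by the count.
        Stopwatch stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            action();
        }
        stopwatch.Stop();

        double averageMs = stopwatch.Elapsed.TotalMilliseconds / iterations;
        Console.WriteLine("Average per call: {0} ms", averageMs);
    }
}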
The use of BlockingCollection<TimeSpan>(new ConcurrentBag<TimeSpan>()) is also very weird. Since you are just adding to the collection, and it is static and pre-created, you could just use List<TimeSpan>. See the notes on Thread Safety in the List<T> documentation here.
Ignore that. I misunderstood the context of the docs. Your code is just fine, and is indeed thread-safe. Thanks to Jim and Alexi for clearing that up.

They can be 'trusted' all right, but that does not mean they will be very accurate.
It depends on many factors, but basically you would want to measure a large number of calls to action() (on the same thread) and average them, especially when a single call takes a relatively short time (<= 1 ms).
You will still have to deal with external factors; Windows is not a real-time OS.

Related

What is the minimum wait time to use ManualResetEventSlim instead of ManualResetEvent?

Since .NET 4 I can use the ManualResetEventSlim class, which spins briefly before blocking in order to save time when the blocking time is short (there is no context switch).
I'd like to measure with a benchmark how short this time is, in order to know, more or less, below what wait time it is preferable to use a ManualResetEventSlim instead of a classic ManualResetEvent.
I know this measure is CPU-dependent and that it is impossible to know the spin time a priori, but I'd like to have an order of magnitude.
I wrote a benchmark class to find the minimum MillisecondsSleep that makes ManualResetEventSlim better than ManualResetEvent.
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;

public class ManualResetEventTest
{
    [Params(0, 1, 10)]
    public int MillisecondsSleep;

    [Benchmark]
    public void ManualResetEventSlim()
    {
        using var mres = new ManualResetEventSlim(false);
        var t = Task.Run(() =>
        {
            mres.Wait();
        });
        Thread.Sleep(MillisecondsSleep);
        mres.Set();
        t.Wait();
    }

    [Benchmark]
    public void ManualResetEvent()
    {
        using var mres = new ManualResetEvent(false);
        var t = Task.Run(() =>
        {
            mres.WaitOne();
        });
        Thread.Sleep(MillisecondsSleep);
        mres.Set();
        t.Wait();
    }
}
And the result is the following (the results table is not reproduced here):
As you can see, I found improved performance only with Thread.Sleep(0). Furthermore, I see a ~15 ms mean time with both the 1 ms and the 10 ms sleep.
Am I missing something?
Is it true that ManualResetEventSlim is preferable to ManualResetEvent only with the 0 ms wait?
From the excellent C# 9.0 in a Nutshell book:
Waiting or signaling an AutoResetEvent or ManualResetEvent takes about one microsecond (assuming no blocking).
ManualResetEventSlim and CountdownEvent can be up to 50 times faster in short-wait scenarios because of their nonreliance on the OS and judicious use of spinning constructs. In most scenarios, however, the overhead of the signaling classes themselves doesn't create a bottleneck; thus, it is rarely a consideration.
Hopefully that's enough to give you a rough order of magnitude.
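To make that concrete, here is a small sketch of the overload that exposes the spinning the book describes; the spin count shown is illustrative, not a recommendation:

using System;
using System.Threading;
using System.Threading.Tasks;

class SlimEventDemo
{
    static void Main()
    {
        // spinCount is illustrative; by default the runtime picks a value itself.
        using var slim = new ManualResetEventSlim(initialState: false, spinCount: 100);

        var waiter = Task.Run(() =>
        {
            slim.Wait();  // spins briefly, then falls back to a kernel wait
            Console.WriteLine("Signaled");
        });

        slim.Set();       // if this lands during the spin phase, no context switch is needed
        waiter.Wait();
    }
}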

Running a few simple tasks at once doesn't seem to work as expected

I ran into a problem: the output of the tasks is not in the order it seemingly should be, judging by the console write order.
using System;
using System.Threading.Tasks;

namespace randomTaskTest
{
    class Program
    {
        public static void foo()
        {
            Random rnd = new Random((int)DateTime.Now.Ticks); // I know that I should not cast long to int
            int target = 3;
            int currentNumber = rnd.Next(1, 100);
            int tryCounter = 0;
            while (currentNumber != target)
            {
                currentNumber = rnd.Next(1, 100);
                tryCounter++;
            }
            Console.WriteLine(tryCounter);
            rnd = null;
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 30; i++)
                Task.Run(() => { foo(); });
            Console.ReadKey();
        }
    }
}
I start 30 tasks there, each staying in a loop until its Random finds the correct number in the range 1-100. In a perfect world the first output entries in the console window would be sorted in ascending order, but they are not.
My understanding of this code is that as soon as a task finds the number, it should write to the console window right away, because the luckiest Random instances leave the loop first.
The output is something like:
3
18
7
30
instead of
3
7
18
30
etc.
Is this impossible to avoid when the task creation time is longer than the task execution time? At least I think this task takes less time to finish than it takes to create.
The Task API uses the ThreadPool (or additional threads) behind the scenes. Exactly which thread gets executed, when, and for how long depends on how the operating system schedules the threads.
For example: let's say you have two cores, and you're running only two tasks. Both tasks are being executed, and both are at 5 trials. Now some system process in the background needs to do some calculation, so the operating system has to pause one of the two tasks that are running in parallel. The other task now continues to, let's say, 500 trials and prints that out before the system process clears the way for the paused task again, which then immediately hits the target and prints out 6.
This is called a race condition, and you shouldn't rely on something finishing earlier than something else, because the OS is doing so much in the background that you can never say which thread/task is going to be executed at which exact time. It's a frequent source of errors that are hard to find. :-)
Also, your processor can only execute as many tasks at the same instant as it has cores. When more tasks are submitted than cores are available, the API either waits for some of them to complete, or some are paused and resumed later. This also leads to irregular processor time for each task.
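If ordered output is actually needed, one option is to let each task return its count, wait for all of them, and then sort. A sketch, assuming .NET 6+ so Random.Shared is available (on older versions, seed one Random per task instead):

using System;
using System.Linq;
using System.Threading.Tasks;

class OrderedResults
{
    static int Foo()
    {
        const int target = 3;
        int tryCounter = 0;
        while (Random.Shared.Next(1, 100) != target) // thread-safe shared instance
        {
            tryCounter++;
        }
        return tryCounter;
    }

    static async Task Main()
    {
        var tasks = Enumerable.Range(0, 30).Select(_ => Task.Run(Foo));
        int[] counts = await Task.WhenAll(tasks); // wait for every task's result

        foreach (int count in counts.OrderBy(c => c)) // sort instead of relying on finish order
            Console.WriteLine(count);
    }
}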

When is Parallel.Invoke useful?

I'm just diving into learning about the Parallel class in the 4.0 Framework and am trying to understand when it would be useful. At first after reviewing some of the documentation I tried to execute two loops, one using Parallel.Invoke and one sequentially like so:
static void Main()
{
    DateTime start = DateTime.Now;
    Parallel.Invoke(BasicAction, BasicAction2);
    DateTime end = DateTime.Now;
    var parallel = end.Subtract(start).TotalSeconds;

    start = DateTime.Now;
    BasicAction();
    BasicAction2();
    end = DateTime.Now;
    var sequential = end.Subtract(start).TotalSeconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}

static void BasicAction2()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction2, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}
There is no noticeable difference in time of execution here, or am I missing the point? Is it more useful for asynchronous invocations of web services or...?
EDIT: I replaced the DateTime timing with Stopwatch and replaced the console writes with a simple addition operation.
UPDATE - Big Time Difference Now: Thanks for clearing up the problems I had when I involved the console.
static void Main()
{
    Stopwatch s = new Stopwatch();
    s.Start();
    Parallel.Invoke(BasicAction, BasicAction2);
    s.Stop();
    var parallel = s.ElapsedMilliseconds;

    s.Reset();
    s.Start();
    BasicAction();
    BasicAction2();
    s.Stop();
    var sequential = s.ElapsedMilliseconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    Thread.Sleep(100);
}

static void BasicAction2()
{
    Thread.Sleep(100);
}
The test you are doing is nonsensical; you are testing whether something that cannot be performed in parallel is faster when you perform it in parallel.
Console.WriteLine handles synchronization for you, so it will always act as though it is running on a single thread.
From here:
...call the SetIn, SetOut, or SetError method, respectively. I/O operations using these streams are synchronized, which means multiple threads can read from, or write to, the streams.
Any advantage that the parallel version gains from running on multiple threads is lost through the marshaling done by the console. In fact, I wouldn't be surprised if all the thread switching actually made the parallel run slower.
Try doing something else in the actions (a simple Thread.Sleep would do) that can be processed by multiple threads concurrently and you should see a large difference in the run times. Large enough that the inaccuracy of using DateTime as your timing mechanism will not matter too much.
It's not a matter of time of execution. The output to the console is determined by how the actions are scheduled to run. To get an accurate time of execution, you should be using Stopwatch. At any rate, you are using Console.WriteLine, so it will appear as though everything is in one thread of execution. Anything you have tried to attain by using Parallel.Invoke is lost by the nature of Console.WriteLine.
On something simple like that the run times will be the same. What Parallel.Invoke is doing is running the two methods at the same time.
In the first case you'll have lines spat out to the console in a mixed up order.
Method=BasicAction2, Thread=6, i=9776
Method=BasicAction, Thread=10, i=9985
// <snip>
Method=BasicAction, Thread=10, i=9999
Method=BasicAction2, Thread=6, i=9777
In the second case you'll have all the BasicAction lines before the BasicAction2 lines.
What this shows you is that the two methods are running at the same time.
In the ideal case (if the number of delegates equals the number of parallel threads and there are enough CPU cores), the duration of the operations becomes MAX(AllDurations) instead of SUM(AllDurations), where AllDurations is a list of each delegate's execution time, like {1 sec, 10 sec, 20 sec, 5 sec}. In less ideal cases it moves in that direction.
It's useful when you don't care about the order in which the delegates are invoked, but you do care that thread execution blocks until every delegate has completed. So yes, it can be a situation where you need to gather data from various sources before you can proceed (they can be web services or other kinds of sources).
Parallel.For can be used much more often, I think. Invoke pretty much requires that you have different tasks, each taking a substantial time to execute; and I guess if you have no idea of the possible range of execution times (which is true for web services), Invoke will shine the most.
Maybe your static constructor needs to build two independent dictionaries for your type to use: you can invoke the methods that fill them in parallel using Invoke() and, for example, cut the time in half if they both take roughly the same time.
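A minimal sketch of that last idea; the type and the fill methods are hypothetical stand-ins:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class LookupTables
{
    public static readonly Dictionary<int, string> TableA = new Dictionary<int, string>();
    public static readonly Dictionary<int, string> TableB = new Dictionary<int, string>();

    static LookupTables()
    {
        // The two fills are independent and each touches only its own
        // dictionary, so no locking is needed between them.
        Parallel.Invoke(FillTableA, FillTableB);
    }

    static void FillTableA()
    {
        for (int i = 0; i < 1000; i++) TableA[i] = "A" + i;
    }

    static void FillTableB()
    {
        for (int i = 0; i < 1000; i++) TableB[i] = "B" + i;
    }
}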

Recursively calling a method (for object reuse purposes)

I have a rather large class which contains plenty of fields (10+), a huge array (100 KB), and some unmanaged resources. Let me explain by example:
class ResourceIntensiveClass
{
    private object unmanagedResource; //let it be the expensive resource
    private byte[] buffer = new byte[1024 * 100]; //let it be the huge managed memory
    private Action<ResourceIntensiveClass> OnComplete;

    private void DoWork(object state)
    {
        //do long running task
        OnComplete(this); //notify callee that task completed so it can reuse same object for another task
    }

    public void Start(object dataRequiredForCurrentTask)
    {
        ThreadPool.QueueUserWorkItem(DoWork); //initiate long running work
    }
}
The problem is that the Start method never returns; after the 10,000th iteration it causes a stack overflow. I could execute the OnComplete delegate on another thread to give the Start method a chance to return, but that requires extra CPU time and resources, as you know. So what is the best option for me?
Is there a good reason for doing your calculations recursively? It seems like a simple loop would do the trick, obviating the need for incredibly deep stacks. This design seems especially problematic since you are relying on Main() to set up your recursion.
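For example, a sketch of what the loop-based shape could look like, assuming the goal is to reuse one instance across 10,000 sequential work items (the names mirror the question's class):

private void DoAllWork()
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        for (int i = 0; i < 10000; i++)
        {
            // do the long-running task for item i, reusing this
            // object's buffer and unmanaged resource each time
        }
        OnComplete(this); // fire once at the end instead of re-entering Start
    });
}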
Recursive methods can get out of hand quite fast. Have you looked into using Parallel LINQ?
You could do something like:
(your Array).AsParallel().ForAll(item => item.CallMethod());
You could also look into the Task Parallel Library (TPL); with tasks, you can define an action and a continuation task.
The Reactive Framework (Rx), on the other hand, could handle these on-complete events in an async manner.
Where are you changing the value of taskData so that its length can ever equal currentTaskIndex? Since the tasks you are assigning to the data are never changing, they are being carried out forever...
I would guess that the problem arises from using the pre-increment operator here:
if (c.CurrentCount < 10000)
    c.Start(++c.CurrentCount);
I am not sure of the semantics of pre-increment in C#; perhaps the value passed to the method call is not what you expect.
But since your Start(int) method assigns the input value to this.CurrentCount as its first step anyway, you should be safe replacing this with:
if (c.CurrentCount < 10000)
    c.Start(c.CurrentCount + 1);
There is no point in assigning to c.CurrentCount twice.
If using the threadpool, I assume you are protecting the counters (c.CurrentCount), otherwise concurrent increments will cause more activity, not just 10000 executions.
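For illustration, a sketch of one way to protect such a counter; the class and method names are assumptions:

using System.Threading;

class Counter
{
    private int currentCount;

    public bool TryClaimNext(int limit)
    {
        // Interlocked.Increment updates and reads the field atomically,
        // so two concurrent callbacks can never observe the same value.
        return Interlocked.Increment(ref currentCount) <= limit;
    }
}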
There's a neat tool called a ManualResetEvent that could simplify life for you.
Place a ManualResetEvent in your class and add a public OnComplete event.
When you declare your class, you can wire up the OnComplete event to some spot in your code or not wire it up and ignore it.
This would give your custom class a more correct form.
When your long process is complete (I'm guessing this is in a thread), simply call the Set method of the ManualResetEvent.
As for running your long method, it should be in a thread that uses the ManualResetEvent in a way similar to below:
private void DoWork(object state)
{
    ManualResetEvent mre = new ManualResetEvent(false);
    Thread thread1 = new Thread(
        () =>
        {
            //do long running task
            mre.Set();
        });
    thread1.IsBackground = true;
    thread1.Name = "Screen Capture";
    thread1.Start();
    mre.WaitOne();
    OnComplete(this); //notify callee that task completed so it can reuse same object for another task
}

Easiest way to make a single statement async in C#?

I've got a single statement running on an ASP.NET page that takes a long time (by "long" I mean 100 ms, which is too long if I want this page to be lightning fast), and I don't care when it executes as long as it executes.
What is the best (and hopefully easiest) way to accomplish this?
The easiest way is probably to get it to execute in the threadpool. For example, to make this asynchronous:
using System;
using System.Threading;

class Test
{
    static void ExecuteLongRunningTask()
    {
        Console.WriteLine("Sleeping...");
        Thread.Sleep(1000);
        Console.WriteLine("... done");
    }

    static void Main()
    {
        ExecuteLongRunningTask();
        Console.WriteLine("Finished");
        Console.ReadLine();
    }
}
Change the first line of Main to:
ThreadPool.QueueUserWorkItem(x => ExecuteLongRunningTask());
Be aware that if you pass any arguments in, they'll be captured variables - that could have subtle side-effects in some cases (particularly if you use it in a loop).
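A short sketch of the loop pitfall being alluded to; the Sleep at the end is only there to keep the demo process alive:

using System;
using System.Threading;

class CapturePitfall
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            // Risky: every callback would capture the same loop variable "i",
            // so they could all print 5 by the time they run:
            //ThreadPool.QueueUserWorkItem(x => Console.WriteLine(i));

            int copy = i; // safe: each iteration captures its own variable
            ThreadPool.QueueUserWorkItem(x => Console.WriteLine(copy));
        }
        Thread.Sleep(500); // crude wait so the queued items finish before exit
    }
}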
If it's 100ms, then don't bother. Your users can't detect a 100ms delay.
Edit: some explanations.
If I remember correctly, 100 milliseconds (1/10 second) is near the minimum amount of time that a human can perceive. So, for the purpose of discussion, let me grant that this 100ms can be perceived by the users of the OP's site, and that it is worthwhile to improve performance by 100ms. I assumed from the start that the OP had correctly identified this "long-running" task as a potential source of a 100ms improvement. So, why did I suggest he ignore it?
Dealing with multiple threads properly is not easy, and is a source of bugs that are difficult to track down. Adding threads to a problem is usually not a solution, but is rather a source of other problems you simply don't find right away (*).
I had the opportunity once to learn this the hard way, with a bug that could only be reproduced on the fastest eight-cpu system available at the time, and then only by pounding on the thing relentlessly, while simulating a degree of network failure that would have caused the network administrators to be lined up and shot, if it had happened in real life. The bug turned out to be in the Unix OS kernel handling of signals, and was a matter of the arrangement of a handful of instructions.
Granted, I've never seen anything that bad since then, but I've still seen many developers tripped up by multithreading bugs. This question seemed to be, on the one hand, asking for an "easy way out" via threading, while on the other hand the benefit was only 100ms. Since it did not appear that the OP already had a well-tested threading infrastructure, it seemed to me that it made better sense to ignore the 100ms, or perhaps to pick up performance some other way.
(*) Of course, there are many circumstances where an algorithm can profitably be made parallel and executed by multiple threads, running on multiple cores. But it does not sound like the OP has such a case.
There's a background worker thread that can be very useful when processing information in the background of a .NET application. A bit more context would help answer the question better.
// This is the delegate to use to execute the process asynchronously
public delegate void ExecuteBackgroundDelegate();

[STAThread]
static void Main(string[] args)
{
    MyProcess proc = new MyProcess();

    // create an instance of our execution delegate
    ExecuteBackgroundDelegate asynchExec = new ExecuteBackgroundDelegate(proc.Execute);

    // execute this asynchronously
    asynchExec.BeginInvoke(null, null);
}
If you mean that the response can be sent before the command is run, you could use ThreadPool.QueueUserWorkItem to run it on another thread without blocking your request.
You can use the ThreadPool:
ThreadPool.QueueUserWorkItem( o => Thread.Sleep(1000) /*your long task*/ );
class Test
{
    void LongRunningTask()
    {
        Console.WriteLine("Sleeping...");
        Thread.Sleep(10000);
        Console.WriteLine("... done");
    }

    static void Main()
    {
        Test t = new Test();
        new Action(() => t.LongRunningTask()).BeginInvoke(null, t);
    }
}
I wouldn't bother with any of that threading stuff for a 100ms delay. This will do fine:
protected void Page_Unload(object sender, EventArgs e)
{
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.Close();
    // Your code here
}
(copied from my earlier answer for this question)
Since the connection to the client downloading your page will be closed, their browser will stop displaying a loading message. Your code will continue execution normally however.
