Reading this article, I found several ways to call a method.
Method to call:
public static void SendData(string value) { }
Calls:
delegate void MyDelegate(string value);
//Slow method - NOT RECOMMENDED IN PRODUCTION!
SendData("Update");
// Fast method - STRONGLY RECOMMENDED FOR PRODUCTION!
MyDelegate d = new MyDelegate(SendData);
d.BeginInvoke("Update", null, null);
Is it true? Is it faster?
Or maybe this?
Action send = () => SendData("Update");
send();
I need to call a method inside a SQL CLR trigger with maximum performance, so even a small speed increase matters.
Which is "faster"?
1) Ask Bob to mow your lawn. Wait until he's done. Then go to the mall.
2) Ask Bob to mow your lawn. Go to the mall while he's mowing your lawn.
The second technique gets you to the mall a lot faster. The price you pay is that you have no idea whether the lawn is going to be mowed by the time you get home or not. With the first technique, you know that when you get home from the mall the lawn will be mowed because you waited until it was before you left in the first place. If your logic depends on knowing that the lawn is mowed by the time you get back then the second technique is wrong.
Now the important bit: Obviously neither technique gets your lawn mowed faster than the other. When you're asking "which is faster?" you have to indicate what operation you're measuring the speed of.
Using a delegate is no faster than directly calling the method (in all reality, creating a delegate and then calling it would be more expensive).
The reason that this is going to seem faster is because directly calling the method blocks the executing thread while the method runs. Your delegate example calls the method asynchronously (using BeginInvoke) so the calling thread continues to execute while the method is executed.
Also, whenever you have a call to BeginInvoke on a delegate you should also have the corresponding EndInvoke, which you're missing in your example:
Is EndInvoke() optional, sort-of optional, or definitely not optional?
and
IanG on Tap: EndInvoke Not Optional
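A minimal sketch of that pairing, assuming the same SendData method and MyDelegate type from the question, is to call EndInvoke from the completion callback (passing the delegate as the state argument is just one convenient way for the callback to reach it):
MyDelegate d = new MyDelegate(SendData);
d.BeginInvoke("Update",
    ar => ((MyDelegate)ar.AsyncState).EndInvoke(ar), // pair every BeginInvoke with an EndInvoke
    d);                                              // pass the delegate as state so the callback can end the call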
It's a placebo speed improvement from the point of view of when SendData returns to the caller. BeginInvoke will take a ThreadPool thread and start the method on that thread, then return to the caller immediately - the actual work happens on another thread. The time it takes to do that work remains the same regardless of which thread it's on. It might improve the responsiveness of your application, depending on the work, but delegates are not faster than direct method calls - as I say, in your situation it only seems faster because it returns immediately.
Try this: change BeginInvoke to Invoke - the caller is now blocking, the same as calling SendData normally.
Assuming the code comments are not yours (i.e., "RECOMMENDED FOR PRODUCTION"), I would quickly find the developer responsible and make sure they are aware of what Delegate.BeginInvoke does and of the fact that they are making their app multi-threaded without realising it...
To answer the question, a direct method call is always the fastest way - delegates or reflection incur overhead.
Your best chance to increase performance would be to optimize the code in the method that will be in the SQL CLR stored procedure that the trigger will call. Could you post more information about that?
Note that in the article you cite, the author is talking about WCF calls, notably calls for inserting and updating a database.
The key points to note in that specific case are:
The work is being done by another machine.
The only information you are getting back is "Success!" (usually) or (occasionally) "Failure" (which the author doesn't seem to care about)
Hence, in that specific case, the background call was better. For general-purpose use, direct calls are better.
Related
Suppose I have a simple class with an async method in it:
public class Writer
{
public Task WriteAsync(string message);
}
This is an internal class, which is absolutely peripheral to the application's business logic.
The main idea is that when the method is called, it must immediately return control to the calling method, to avoid any possible delay in that important, business-logic-heavy caller (a delay inside WriteAsync itself is acceptable, of course).
This method is called in many different places, very often, and we don't really care whether it succeeds or fails to write the last messages in some unexpected situation. That's fine.
So the question is: how can I call WriteAsync so as to avoid any possible delay in the calling method?
I thought about Task.Run(() => WriteAsync(message)) (without await - we don't need to wait for this!), but won't that fill my thread pool with a lot of useless work? And it's quite tedious to write such code everywhere...
You may queue the writes and process the queue, i.e. perform the writing, on a dedicated background thread. This is kind of what happens when you call Task.Run, i.e. you queue up delegates in the thread pool. If you require more control, you may for example use a BlockingCollection<T>.
There is an example of how to use a BlockingCollection<T> to read and write items concurrently available on MSDN.
Using this approach, calling WriteAsync will only block for the time it takes to add the message to the queue and this time should be negligible.
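As a rough sketch (the QueuedWriter type, the TextWriter destination and the names here are just placeholders, not your Writer class), the dedicated background thread could look like this: callers only pay for the enqueue, while a single long-running consumer drains the queue and does the actual writing.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public sealed class QueuedWriter : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Task _consumer;
    private readonly TextWriter _destination;

    public QueuedWriter(TextWriter destination)
    {
        _destination = destination;
        _consumer = Task.Factory.StartNew(
            () =>
            {
                // Blocks while the queue is empty; ends when CompleteAdding is called.
                foreach (var message in _queue.GetConsumingEnumerable())
                    _destination.WriteLine(message);
            },
            TaskCreationOptions.LongRunning);
    }

    // Callers only pay for the enqueue, which is effectively instant.
    public void Write(string message) => _queue.Add(message);

    public void Dispose()
    {
        _queue.CompleteAdding(); // let the consumer drain what is left
        _consumer.Wait();
    }
}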
Because the method is asynchronous, by definition it already returns control to the caller immediately. If the implementation of that method isn't actually asynchronous, then it should either not return a Task, not have Async in the name, and make it clear to callers that it's synchronous, or it should fix the bug in its implementation that makes it block the caller for an extended period of time. Callers of the method will rightfully expect that, being an asynchronous method, it will return control to the caller immediately when called normally. If the method has a bug that makes it not do that, you shouldn't work around that bug and have callers treat it as a synchronous method when it claims it isn't.
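Put differently, if WriteAsync really is asynchronous, fire-and-forget is just a normal call whose returned Task you deliberately ignore (a sketch reusing the writer and message from the question):
// Starts the write; the Task comes back almost immediately and is intentionally discarded.
_ = writer.WriteAsync(message);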
I'm wondering whether there is any performance gain if I use the async feature in my data access layer, as below:
public async Task<IEnumerable<TrnApplicant>> GetAllMemberApplicantsAsync(String webReferenceNumber)
{
    using (var context = new OnlineDataContext())
    {
        var applicant = await Task.Run(() => context.Applicants.First(
            app => app.RefNo.Equals(webReferenceNumber, StringComparison.OrdinalIgnoreCase)));

        return GetApplicantsInGroup(applicant.ApplicantsGroupId);
    }
}
Also, if not, when does it make more sense?
Consider this.
You call someone and ask them to do something for you. While they do it, you wait on the line, waiting for them to say "It's done". This is synchronous work. The act of calling them "blocks" until they're done with the job, and then you can get back to whatever you were doing, afterwards.
The alternative, asynchronous way, would be for you to make the call, but instead of waiting on the phone you hang up, and do something else while they work. Once they call you back, saying "It's done", you go back to doing whatever you needed their results to do.
Now, if you have nothing to do while you wait, there will be absolutely no performance gain, instead quite the opposite. The overhead of having the other party call you back will instead add to the total amount of work to do.
So with the above description of asynchronous work, do you have anything your code could do while it waits for the data access layer to complete its job?
If not, then no, there would be no performance gain.
If yes, then there could be performance gains.
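In code, "having something to do while you wait" looks roughly like this (DoOtherUsefulWork is just an illustrative stand-in for independent work):
// Start the query, do independent work while it is in flight, then await the result.
Task<IEnumerable<TrnApplicant>> pending = GetAllMemberApplicantsAsync(webReferenceNumber);

DoOtherUsefulWork();            // hypothetical work that doesn't need the applicants

var applicants = await pending; // only now do we actually wait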
Now, having said all that, I read the comment below my answer, and then I re-read your code a bit more carefully, and I believe that you are not really taking advantage of proper asynchronous code here.
The best way would be for you to use some kind of system or code that does asynchronous I/O properly. Your code calls Task.Run, which is actually the same as just plunking a different person down in front of the phone, doing the waiting for you.
For instance, consider SqlCommand, which may be the actual code doing the talking to the database here, it has two interesting methods:
SqlCommand.ExecuteReader
SqlCommand.BeginExecuteReader
Now, if you call the first one, on a thread created with Task.Run, you are in effect still blocking, you're just asking someone else to do it for you.
The second, however, is as I described above.
So in your particular case I would try to avoid using Task.Run. Now, depending on the load of your server, it may be advantageous to do it like that, but if you can, I would switch to using the asynchronous methods of the underlying objects to do it properly.
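As a rough sketch of "using the asynchronous methods of the underlying objects" with raw ADO.NET (here using the Task-based counterparts of BeginExecuteReader; the connection string, the SQL and the MapApplicant helper are placeholders, not your actual code):
using System.Data.SqlClient;
using System.Threading.Tasks;

public async Task<TrnApplicant> GetApplicantAsync(string webReferenceNumber)
{
    using (var connection = new SqlConnection(_connectionString))
    using (var command = new SqlCommand(
        "SELECT * FROM Applicants WHERE RefNo = @refNo", connection))
    {
        command.Parameters.AddWithValue("@refNo", webReferenceNumber);

        await connection.OpenAsync();                    // true async I/O - no thread sits blocked
        using (var reader = await command.ExecuteReaderAsync())
        {
            return await reader.ReadAsync() ? MapApplicant(reader) : null;
        }
    }
}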
I'll try to clarify my question :
I have a function called Draw (which something (XNA) calls 60 times per second),
and I have many objects to draw, so I have the following code:
void Draw()
{
    obj1.draw();
    obj2.draw();
    obj3.draw();
    ....
}
Will there be a performance impact if, instead, I create an event which will be raised by Draw(), with all the objects signing up for that event?
In case I wasn't clear, what I'm asking is:
Is calling a function by signing up to an event different from a regular call?
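For concreteness, here's a rough sketch of the event-based version I mean (the event name is made up):
public event Action DrawRequested;   // each drawable object subscribes its own draw method

void Draw()
{
    DrawRequested?.Invoke();         // one multicast delegate invocation fans out to every subscriber
}

// during setup, instead of the explicit list of calls:
DrawRequested += obj1.draw;
DrawRequested += obj2.draw;
DrawRequested += obj3.draw;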
Regarding performance, I think Jon Skeet's example is pretty conclusive that delegates don't add any significant overhead to performance, and may even improve it.
One factor you do need to consider with events/delegates is unhooking objects that are listening to an event; otherwise the publisher keeps those listeners alive and you can end up with memory leaks. Avoid anonymous methods unless you're prepared to store references to them so they can be unwired on Dispose, etc.
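A sketch of that unhooking pattern, building on the DrawRequested event sketched above (the Game and HudElement types are hypothetical): keep a reference to the exact handler you subscribed so you can remove it again.
public sealed class HudElement : IDisposable
{
    private readonly Game _game;
    private readonly Action _handler;

    public HudElement(Game game)
    {
        _game = game;
        _handler = DrawSelf;               // store the delegate instance we subscribe with
        _game.DrawRequested += _handler;
    }

    private void DrawSelf() { /* draw this element */ }

    public void Dispose()
    {
        _game.DrawRequested -= _handler;   // unhook, so the publisher no longer keeps this object alive
    }
}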
ildasm shows that a direct call of a function is performed with a "call" instruction, while a call through an event is performed with "callvirt delegatename::Invoke()". It may seem that the direct call should be faster, but let's consider what Invoke() is. Invoke is not a member of the Delegate or MulticastDelegate classes; it's a special method generated by the compiler:
.method public hidebysig virtual instance void
Invoke(string s) runtime managed
{
}
This method doesn't contain any implementation, which might look strange. But if we pay attention to the "runtime" specifier, the magic dissipates: "runtime" means the code is generated by the runtime itself, and that happens only once. So in theory both should be about the same in terms of performance.
As for Jon Skeet's test, I launched it several times, interchanging the direct call with the call through a delegate, and didn't get confirmation that delegates improve performance. Sometimes the delegate won, sometimes the direct call won. I think that's because the GC or something else inside .NET affects the test, or just Windows switching processes.
Referring to the thread-safe call tutorial on MSDN, have a look at the following statements:
// InvokeRequired required compares the thread ID of the
// calling thread to the thread ID of the creating thread.
// If these threads are different, it returns true.
if (this.textBox1.InvokeRequired) {
    SetTextCallback d = new SetTextCallback(SetText);
    this.Invoke(d, new object[] { text });
} else {
    this.textBox1.Text = text;
}
Of course, I've used it many times in my code, and I understand a little about why it's needed.
But I still have some questions about those statements, so please help me clear them up.
The questions are:
Will the code run correctly with only the statements in the if body? I tried it, and it only seems to cause a problem if the control is not completely initialized. Are there more problems I don't know about?
What is the advantage of calling the method directly (the else body) instead of going through the invoker? Does it save resources (CPU, RAM) or something?
Thanks!
You can of course always call using the Invoker, but:
It usually makes the code more verbose and difficult to read.
It is less efficient as there are several extra layers to contend with (setting up delegates, calling the dispatcher and so on).
If you are sure you'll always be on the GUI thread, you can just ignore the above checks and call directly.
If you always run just the first part of the if statement, it will always be fine, as Invoke already checks if you're on the UI thread.
The reason you don't want to do this is that Invoke has to do a lot of work to run your method, even if you're already on the right thread. Here's what it has to do (extracted from the source of Control.cs):
Find the marshaling control via an upward traversal of the parent control chain
Check if the control is an ActiveX control and, if so, demand unmanaged code permissions
Work out if the call needs to be invoked asynchronously to avoid potential deadlock
Take a copy of the calling thread's execution context so the same security permissions will be used when the delegate is finally called
Enqueue the method call, then post a message to invoke the method, then wait (if synchronous) until it completes
None of the steps in the second branch are required during a direct call from the UI thread, as all the preconditions are already guaranteed, so it's definitely going to be faster, although to be fair, unless you're updating controls very frequently, you're very unlikely to notice any difference.
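If you want to keep the check without repeating the boilerplate everywhere, a small extension method (not part of the framework, just a sketch) captures the usual pattern:
using System;
using System.Windows.Forms;

public static class ControlExtensions
{
    // Runs the action directly when already on the UI thread,
    // otherwise marshals it over with Invoke.
    public static void InvokeIfRequired(this Control control, Action action)
    {
        if (control.InvokeRequired)
            control.Invoke(action);
        else
            action();
    }
}

// usage:
this.textBox1.InvokeIfRequired(() => this.textBox1.Text = text);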
Andreas Huber's answer to this question gave me an idea to implement Concurrent<T> with async delegates instead of the ThreadPool. However, I am finding it harder to understand what's going on when an AsyncCallback is passed to BeginInvoke, especially when several threads have access to IAsyncResult. Unfortunately, this case doesn't seem to be covered at MSDN or anywhere I could find. Moreover, all articles I could find were either written before closures and generics were available or just seem that way. There are several questions (and the answers which I hope are true, but I am ready to be disappointed):
1) Would using a closure as an AsyncCallback make any difference?
(Hopefully not)
2) If a thread waits on the AsyncWaitHandle, will it be signaled
a) before the callback starts or
b) after it finishes?
(Hopefully b)
3) While the callback is running, what will IsCompleted return? Possibilities I can see:
a) true;
b) false;
c) false before the callback calls EndInvoke, true after.
(Hopefully b or c)
4) Will ObjectDisposedException be thrown if some thread waits on the AsyncWaitHandle after EndInvoke is called?
(Hopefully not, but I expect yes).
Provided the answers are as I hope, this seems like it should work:
public class Concurrent<T> {
    private IAsyncResult _asyncResult;
    private T _result;

    public Concurrent(Func<T> f) { // Assume f doesn't throw exceptions
        _asyncResult = f.BeginInvoke(
            asyncResult => {
                // Assume assignment of T is atomic
                _result = f.EndInvoke(asyncResult);
            }, null);
    }

    public T Result {
        get {
            if (!_asyncResult.IsCompleted)
                // Is there a race condition here?
                _asyncResult.AsyncWaitHandle.WaitOne();
            return _result; // Assume reading of T is atomic
        }
    ...
If the answers to questions 1-3 are the ones I hope for, there should be no race condition here, as far as I can see.
Question 1
I think part of the problem is a misconception. IAsyncResult is not accessed from multiple threads unless you explicitly pass it to one. If you look at the implementation of most Begin***-style APIs in the BCL, you'll notice the IAsyncResult is only ever created and destroyed from the thread where the Begin*** or End*** call actually occurs.
Question 2
AsyncWaitHandle should be signaled after the operation is 100% complete.
Question 3
IsCompleted should return true once the underlying operation is complete (no more work to do). The best way to view IsCompleted is:
true -> calling End*** will return immediately
false -> calling End*** will block for some period of time
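So a caller that polls (rather than using the wait handle or the callback) might look like this sketch, where DoSomethingElse stands in for any other useful work:
IAsyncResult ar = f.BeginInvoke(null, null);   // f is the Func<T> from the question's Concurrent<T>

while (!ar.IsCompleted)
    DoSomethingElse();                         // stay busy instead of blocking

T result = f.EndInvoke(ar);                    // returns immediately now that IsCompleted is true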
Question 4
This is implementation dependent. There is no way to really give a blanket answer here.
Samples
If you are interested in an API which allows you to easily run a delegate on another thread and access the result when finished, check out my RantPack Utility Library. It's available in source and binary form. It has a fully fleshed out Future API which allows for the concurrent running of delegates.
Additionally there is an implementation of IAsyncResult which covers most of the questions in this post.
I've been looking into async calls just recently. I found a pointer to an article with an example implementation of an IAsyncResult by respected author Jeffrey Richter. I learned a lot about how async calls work by studying this implementation.
You might also see if you can download and examine the source code for the System.Runtime.Remoting.Messaging.AsyncResult you're specifically concerned with. Here's a link to instructions on how to do this in Visual Studio.
To add a bit to JaredPar's good answers...
1: I believe if you define a closure which can be assigned to a variable of type AsyncCallback (takes an IAsyncResult and returns void) it should work as you would expect a closure to work as that delegate, but I'm not sure if there could be scope issues. The originating local scope should have returned long before the callback gets invoked (that's what makes it an asynchronous operation), so bear that in mind with respect to references to local (stack) variables and how that will behave. Referencing member variables should be fine, I would think.
2: I think from your comment that you may have misunderstood the answer to this one. In Jeffrey Richter's example implementation the wait handle is signaled before the callback is invoked. If you think about it, it has to be this way. Once it invokes the callback it loses control of the execution. Suppose the callback method throws an exception.... execution could unwind back past the method which invoked the callback and thus prevent it from ever later signaling the wait handle! So the wait handle needs to be signalled before callback is invoked. They're also much closer in time if done in that order than if it signals the wait handle only after the callback has returned.
3: As JaredPar says, IsCompleted should be returning true before the callback and before the wait handle is signaled. This makes sense because if IsCompleted is false you would expect the call to EndInvoke to block, and the whole point of the wait handle (as with the callback) is to know when the result is ready so it won't block. So, first IsCompleted is set to true, then the wait handle is signaled, and then the callback is called (see the sketch after point 4 below). See how Jeffrey Richter's example does it. However, you should probably try to avoid assumptions about the order in which these three mechanisms (polling, wait handle, callback) might detect the completion, because it's possible to implement them in a different order than expected.
4: I can't help you there, except that you might find the answer by debugging into the framework source code for the implementation you're curious about. Or you could probably come up with an experiment to find out... or set up a good experiment and debug into the framework source to be really sure.
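To make the ordering in point 3 concrete, here's roughly what the completion path looks like in a hand-rolled IAsyncResult such as the one in Richter's article (the field names are invented for this sketch):
// Called once, when the asynchronous operation finishes:
_isCompleted = true;          // 1. polling via IsCompleted now reports completion

if (_waitHandle != null)
    _waitHandle.Set();        // 2. wake up anyone blocked on AsyncWaitHandle

if (_callback != null)
    _callback(this);          // 3. invoke the user callback last, so an exception
                              //    thrown from it cannot prevent steps 1 and 2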