Is there any performance boost by using lambda expressions in C#?

I would like to know if there is any difference in performance between the following:
void Example(string message)
{
    System.Console.WriteLine(message);
}
or
void Example(string message)
{
    ConsoleMessage(message);
}

void ConsoleMessage(string message) => System.Console.WriteLine(message);

You can check how it compiles on this website: SharpLab. As you can see,
    void ConsoleMessage(string message) => System.Console.WriteLine(message);
is compiled to a separate method like any other. The only difference is one extra method call when going through the ConsoleMessage wrapper. You can read more about the cost of method calls in this post: How expensive are method calls in .NET?
It's very, very unlikely to be your bottleneck though. As always, write the most readable code you can first, and then benchmark it to see whether it performs well enough. If it doesn't, use a profiler to find the hotspots which may be worth micro-optimising.
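If you do want hard numbers, BenchmarkDotNet is the usual tool. Here is a minimal sketch (assuming the BenchmarkDotNet NuGet package; Work and Wrapper are hypothetical stand-ins for Console.WriteLine and ConsoleMessage so that console I/O doesn't drown out the call overhead you're trying to measure):
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class CallOverheadBenchmark
{
    private int value = 42;

    private int Work(int x) => x * 2 + 1;      // stand-in for the real method body
    private int Wrapper(int x) => Work(x);     // extra layer, like ConsoleMessage

    [Benchmark(Baseline = true)]
    public int Direct() => Work(value);        // call the body directly

    [Benchmark]
    public int Indirect() => Wrapper(value);   // call through the wrapper
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<CallOverheadBenchmark>();
}
In practice the JIT will often inline such a tiny wrapper anyway, so expect the two results to be close to identical.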

I wrote this:
static void ConsoleMessage(string message) => Console.WriteLine(message);

static void ExampleIndirect(string message)
{
    ConsoleMessage(message);
}

static void ExampleDirect(string message)
{
    Console.WriteLine(message);
}
By using JustDecompile on the release exe, I found that:
private static void ConsoleMessage(string message)
{
    Console.WriteLine(message);
}

private static void ExampleDirect(string message)
{
    Console.WriteLine(message);
}

private static void ExampleIndirect(string message)
{
    Program.ConsoleMessage(message);
}
So I'm not so sure that it "is compiled to another method": the decompiled ConsoleMessage is an ordinary method body, identical in shape to ExampleDirect. In any case, the difference doesn't seem to be noticeable in benchmarks.
EDIT:
Results of my benchmark:

Iterations    Direct time (ms)    Indirect time (ms)
10                   1                    0
5000               154                  148
50000             1514                 1502
100000             3025                 3019
500000            15191                15150
1000000           30362                30276
And the code:
static void Loop(int times)
{
    // writer is a TextWriter (e.g. a StreamWriter over a results file) defined elsewhere
    Stopwatch sw = Stopwatch.StartNew();
    writer.Write(times.ToString() + "\t");
    for (int i = 0; i < times; i++)
        ExampleDirect("Test " + i.ToString());
    writer.Write(sw.ElapsedMilliseconds.ToString() + "\t");
    sw.Restart();
    for (int i = 0; i < times; i++)
        ExampleIndirect("Test " + i.ToString());
    writer.Write(sw.ElapsedMilliseconds.ToString() + Environment.NewLine);
}

Related

Executing 2 methods in parallel using C#

I have a scenario where I need to monitor the state of a UI test. If the test has been running for more than 30 minutes, I want to stop the test run and start another test. Here is the code I developed to simulate this. I apologize if I am duplicating an existing question.
Reference: execute multiple object methods in parallel
Here is the sample program, developed in line with my requirement. I'd appreciate comments on my approach and suggestions for improving it.
namespace ParallelTasksExample
{
    internal class Program
    {
        private static Stopwatch testMonitor;
        private static int timeElapsed;

        private static void Main(string[] args)
        {
            Parallel.Invoke(() => PrintNumber(), () => MonitorSequence());
        }

        private static void PrintNumber()
        {
            testMonitor = new Stopwatch();
            testMonitor.Start();
            for (int i = 0; i <= 10; i++)
            {
                System.Threading.Thread.Sleep(5000);
                timeElapsed = testMonitor.Elapsed.Seconds;
                Console.WriteLine("Running since :" + timeElapsed + " seconds");
            }
        }

        private static void MonitorSequence()
        {
            while (timeElapsed < 25)
            {
                System.Threading.Thread.Sleep(2000);
                Console.WriteLine("Time Elapsed :" + timeElapsed + " seconds");
            }
            testMonitor.Stop();
            testMonitor.Reset();
            Console.WriteLine("Test has taken more than 25 seconds. Closing the current test sequence initiated.");
        }
    }
}
I am facing an issue with the actual code I developed based on the above example: task 1 has completed while task 2 is still in progress, yet task 1 ends up waiting for task 2 to finish. How can I make both tasks independent?
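For what it's worth, one common pattern for this kind of timeout monitoring is to run the work as a Task and cancel it from the outside with a CancellationToken, so neither piece of work has to wait on the other. A rough sketch (the 30-second timeout and the RunTest/MonitorAsync bodies are placeholders, not the real test code):
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

internal class Program
{
    private static async Task Main()
    {
        // Cancel the whole run automatically after the timeout (placeholder value).
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

        Task test = Task.Run(() => RunTest(cts.Token));
        Task monitor = MonitorAsync(cts.Token);

        try
        {
            await test;                           // the monitor never blocks the test
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Test exceeded the timeout and was cancelled.");
        }

        cts.Cancel();                             // stop the monitor once the test is done
        await monitor;
    }

    private static void RunTest(CancellationToken token)
    {
        for (int i = 0; i <= 10; i++)
        {
            token.ThrowIfCancellationRequested(); // co-operative cancellation point
            Thread.Sleep(5000);                   // stand-in for real test work
        }
    }

    private static async Task MonitorAsync(CancellationToken token)
    {
        var sw = Stopwatch.StartNew();
        while (!token.IsCancellationRequested)
        {
            Console.WriteLine("Running since: " + sw.Elapsed.TotalSeconds.ToString("0") + " seconds");
            await Task.Delay(2000);
        }
    }
}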

recursion acceleration after 10,000

I'm running this code:
public static void func(int i)
{
    Console.WriteLine(i);
    func(i + 1);
}

static void Main(string[] args)
{
    func(0);
}
Obviously it causes a StackOverflowException, but something weird happens: from i = 0 to i = 10,000 it runs pretty slowly (about 13 seconds on my computer, using Visual Studio 2015), but from 10,000 to 20,000 it's almost immediate (about 1 second). Why is this happening?
Thanks.
Did you define the buffer size of your console window to be 10,000 lines? Console.WriteLine is the slowest part of your code, and the console window seems to get faster once it has reached its maximum number of lines.
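A rough way to test that hypothesis (a sketch only; Console.BufferHeight is really only meaningful on the classic Windows console, and the exact threshold depends on your window settings):
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        Console.WriteLine("Buffer height: " + Console.BufferHeight + " lines");

        var sw = new Stopwatch();
        for (int block = 0; block < 20; block++)
        {
            sw.Restart();
            for (int i = 0; i < 1000; i++)
                Console.WriteLine(block * 1000 + i);
            sw.Stop();

            // If the buffer-size theory holds, blocks written after the buffer is
            // full (and old lines start scrolling out) should report lower times.
            Console.WriteLine("block " + block + ": " + sw.ElapsedMilliseconds + " ms");
        }
    }
}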
I ran this piece of code in tutorialspoint's online c# compiler:
using System.IO;
using System;
using System.Diagnostics;
class Program
{
    static Stopwatch stopwatch = new Stopwatch();
    static TextWriter tw = new StreamWriter("Result.txt");

    public static void func(int i)
    {
        if (i > 40000)
            return;
        tw.WriteLine(stopwatch.ElapsedMilliseconds + " " + i);
        tw.Flush();
        func(i + 1);
    }

    static void Main(string[] args)
    {
        stopwatch.Start();
        func(0);
        stopwatch.Stop();
        tw.Close();
    }
}
The result I got is this:
As you can see there are a few jumps in the time, specifically at 20461 and 20548. I don't know what to make of this though...
Maybe there is something about the Visual Studio configuration or something specific to your machine. When I run your code in release config it gets executed within 3 milliseconds. In debug mode the "i" never passes 9000 ...
class Program
{
    public static void func(int i)
    {
        if (i % 1000 == 0)
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:ff") + " => i : " + i.ToString());
        func(i + 1);
    }

    static void Main(string[] args)
    {
        func(0);
        Console.Read();
    }
}

Writing to console char by char, fastest way

In a current project of mine I have to parse a string, and write parts of it to the console. While testing how to do this without too much overhead, I discovered that one way I was testing is actually faster than Console.WriteLine, which is slightly confusing to me.
I'm aware this is not the proper way to benchmark stuff, but I'm usually fine with a rough "this is faster than this", which I can tell after running it a few times.
static void Main(string[] args)
{
    var timer = new Stopwatch();

    timer.Restart();
    Test1("just a little test string.");
    timer.Stop();
    Console.WriteLine(timer.Elapsed);

    timer.Restart();
    Test2("just a little test string.");
    timer.Stop();
    Console.WriteLine(timer.Elapsed);

    timer.Restart();
    Test3("just a little test string.");
    timer.Stop();
    Console.WriteLine(timer.Elapsed);
}

static void Test1(string str)
{
    Console.WriteLine(str);
}

static void Test2(string str)
{
    foreach (var c in str)
        Console.Write(c);
    Console.Write('\n');
}

static void Test3(string str)
{
    using (var stream = new StreamWriter(Console.OpenStandardOutput()))
    {
        foreach (var c in str)
            stream.Write(c);
        stream.Write('\n');
    }
}
As you can see, Test1 is using Console.WriteLine. My first thought was to simply call Write for every char, see Test2. But this resulted in taking roughly twice as long. My guess would be that it flushes after every write, which makes it slower. So I tried Test3, using a StreamWriter (AutoFlush off), which resulted in being about 25% faster than Test1, and I'm really curious why that is. Or is it that writing to the console can't be benchmarked properly? (noticed some strange data when adding more test cases...)
Can someone enlighten me?
Also, if there's a better way to do this (going through a string and only writing parts of it to the console), feel free to comment on that.
First, I agree with the other comments that your test harness leaves something to be desired... I rewrote it and included it below. The results after the rewrite show a clear winner:
//Test 1 = 00:00:03.7066514
//Test 2 = 00:00:24.6765818
//Test 3 = 00:00:00.8609692
From this you can see you are correct: the buffered stream writer is more than 25% faster. It's faster only because it's buffered. Internally the StreamWriter implementation uses a default buffer size of around 1~4 KB (depending on the stream type). If you construct the StreamWriter with an 8-byte buffer (the smallest allowed) you will see most of your performance improvement disappear. You can also see this by using a Flush() call following each write.
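To see the buffering at work you don't even need to change the buffer size; here's a small illustration of the difference (a sketch only, separate from the rewritten test below):
using System;
using System.IO;

class BufferingDemo
{
    static void Main()
    {
        // Buffered: characters accumulate in the StreamWriter's internal buffer
        // and reach the console in large chunks; Dispose() flushes the remainder.
        using (var w = new StreamWriter(Console.OpenStandardOutput()))
        {
            foreach (var c in "buffered output\n")
                w.Write(c);
        }

        // Effectively unbuffered: flushing after every character forces one
        // console write per character, which is what makes per-char writes slow.
        using (var w = new StreamWriter(Console.OpenStandardOutput()))
        {
            foreach (var c in "unbuffered output\n")
            {
                w.Write(c);
                w.Flush();
            }
        }
    }
}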
Here is the test rewritten to obtain the numbers above:
private static StreamWriter stdout = new StreamWriter(Console.OpenStandardOutput());
static void Main(string[] args)
{
Action<string>[] tests = new Action<string>[] { Test1, Test2, Test3 };
TimeSpan[] timming = new TimeSpan[tests.Length];
// Repeat the entire sequence of tests many times to accumulate the result
for (int i = 0; i < 100; i++)
{
for( int itest =0; itest < tests.Length; itest++)
{
string text = String.Format("just a little test string, test = {0}, iteration = {1}", itest, i);
Action<string> thisTest = tests[itest];
//Clear the console so that each test begins from the same state
Console.Clear();
var timer = Stopwatch.StartNew();
//Repeat the test many times, if this was not using the console
//I would use a much higher number, say 10,000
for (int j = 0; j < 100; j++)
thisTest(text);
timer.Stop();
//Accumulate the result, but ignore the first run
if (i != 0)
timming[itest] += timer.Elapsed;
//Depending on what you are benchmarking you may need to force GC here
}
}
//Now print the results we have collected
Console.Clear();
for (int itest = 0; itest < tests.Length; itest++)
Console.WriteLine("Test {0} = {1}", itest + 1, timming[itest]);
Console.ReadLine();
}
static void Test1(string str)
{
Console.WriteLine(str);
}
static void Test2(string str)
{
foreach (var c in str)
Console.Write(c);
Console.Write('\n');
}
static void Test3(string str)
{
foreach (var c in str)
stdout.Write(c);
stdout.Write('\n');
}
I ran your test 10,000 times each and the results are the following on my machine:
test1 - 0.6164241
test2 - 8.8143273
test3 - 0.9537039
this is the script I used:
static void Main(string[] args)
{
Test1("just a little test string."); // warm up
GC.Collect(); // compact Heap
GC.WaitForPendingFinalizers(); // and wait for the finalizer queue to empty
Stopwatch timer = new Stopwatch();
timer.Start();
for (int i = 0; i < 10000; i++)
{
Test1("just a little test string.");
}
timer.Stop();
Console.WriteLine(timer.Elapsed);
}
I changed the code to run each test 1000 times.
static void Main(string[] args) {
var timer = new Stopwatch();
timer.Restart();
for (int i = 0; i < 1000; i++)
Test1("just a little test string.");
timer.Stop();
TimeSpan elapsed1 = timer.Elapsed;
timer.Restart();
for (int i = 0; i < 1000; i++)
Test2("just a little test string.");
timer.Stop();
TimeSpan elapsed2 = timer.Elapsed;
timer.Restart();
for (int i = 0; i < 1000; i++)
Test3("just a little test string.");
timer.Stop();
TimeSpan elapsed3 = timer.Elapsed;
Console.WriteLine(elapsed1);
Console.WriteLine(elapsed2);
Console.WriteLine(elapsed3);
Console.Read();
}
My output:
00:00:05.2172738
00:00:09.3893525
00:00:05.9624869
I also ran this one 10000 times and got these results:
00:00:00.6947374
00:00:09.6185047
00:00:00.8006468
This seems in keeping with what others observed. I was curious why Test3 was slower than Test1, so I wrote a fourth test:
timer.Start();
using (var stream = new StreamWriter(Console.OpenStandardOutput()))
{
for (int i = 0; i < testSize; i++)
{
Test4("just a little test string.", stream);
}
}
timer.Stop();
This one reuses the stream for each test, thus avoiding the overhead of recreating it each time. Result:
00:00:00.4090399
Although this is the fastest, it writes all the output at the end of the using block, which may not be what you are after. I would imagine that this approach would chew up more memory as well.
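If you want the buffering without holding all output until the end of a using block, another option (not from the tests above, just a sketch) is to swap Console.Out for a buffered writer once and flush at the points where the text actually needs to appear:
// Replace Console.Out with a buffered writer (AutoFlush is false by default for StreamWriter).
var buffered = new StreamWriter(Console.OpenStandardOutput());
Console.SetOut(buffered);

// ... write as usual via Console.Write / Console.WriteLine ...

Console.Out.Flush();   // flush explicitly whenever the output must become visible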

Am I undermining the efficiency of StringBuilder?

I've started using StringBuilder in preference to straight concatenation, but it seems like it's missing a crucial method. So, I implemented it myself, as an extension:
public static void Append(this StringBuilder stringBuilder, params string[] args)
{
    foreach (string arg in args)
        stringBuilder.Append(arg);
}
This turns the following mess:
StringBuilder sb = new StringBuilder();
...
sb.Append(SettingNode);
sb.Append(KeyAttribute);
sb.Append(setting.Name);
Into this:
sb.Append(SettingNode, KeyAttribute, setting.Name);
I could use sb.AppendFormat("{0}{1}{2}",..., but this seems even less preferred, and still harder to read. Is my extension a good method, or does it somehow undermine the benefits of StringBuilder? I'm not trying to prematurely optimize anything, as my method is more about readability than speed, but I'd also like to know I'm not shooting myself in the foot.
I see no problem with your extension. If it works for you it's all good.
I myself prefer:
sb.Append(SettingNode)
.Append(KeyAttribute)
.Append(setting.Name);
Questions like this can always be answered with a simple test case.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace SBTest
{
class Program
{
private const int ITERATIONS = 1000000;
private static void Main(string[] args)
{
Test1();
Test2();
Test3();
}
private static void Test1()
{
var sw = Stopwatch.StartNew();
var sb = new StringBuilder();
for (var i = 0; i < ITERATIONS; i++)
{
sb.Append("TEST" + i.ToString("00000"),
"TEST" + (i + 1).ToString("00000"),
"TEST" + (i + 2).ToString("00000"));
}
sw.Stop();
Console.WriteLine("Testing Append() extension method...");
Console.WriteLine("--------------------------------------------");
Console.WriteLine("Test 1 iterations: {0:n0}", ITERATIONS);
Console.WriteLine("Test 1 milliseconds: {0:n0}", sw.ElapsedMilliseconds);
Console.WriteLine("Test 1 output length: {0:n0}", sb.Length);
Console.WriteLine("");
}
private static void Test2()
{
var sw = Stopwatch.StartNew();
var sb = new StringBuilder();
for (var i = 0; i < ITERATIONS; i++)
{
sb.Append("TEST" + i.ToString("00000"));
sb.Append("TEST" + (i+1).ToString("00000"));
sb.Append("TEST" + (i+2).ToString("00000"));
}
sw.Stop();
Console.WriteLine("Testing multiple calls to Append() built-in method...");
Console.WriteLine("--------------------------------------------");
Console.WriteLine("Test 2 iterations: {0:n0}", ITERATIONS);
Console.WriteLine("Test 2 milliseconds: {0:n0}", sw.ElapsedMilliseconds);
Console.WriteLine("Test 2 output length: {0:n0}", sb.Length);
Console.WriteLine("");
}
private static void Test3()
{
var sw = Stopwatch.StartNew();
var sb = new StringBuilder();
for (var i = 0; i < ITERATIONS; i++)
{
sb.AppendFormat("{0}{1}{2}",
"TEST" + i.ToString("00000"),
"TEST" + (i + 1).ToString("00000"),
"TEST" + (i + 2).ToString("00000"));
}
sw.Stop();
Console.WriteLine("Testing AppendFormat() built-in method...");
Console.WriteLine("--------------------------------------------");
Console.WriteLine("Test 3 iterations: {0:n0}", ITERATIONS);
Console.WriteLine("Test 3 milliseconds: {0:n0}", sw.ElapsedMilliseconds);
Console.WriteLine("Test 3 output length: {0:n0}", sb.Length);
Console.WriteLine("");
}
}
public static class SBExtentions
{
public static void Append(this StringBuilder sb, params string[] args)
{
foreach (var arg in args)
sb.Append(arg);
}
}
}
On my PC, the output is:
Testing Append() extension method...
--------------------------------------------
Test 1 iterations: 1,000,000
Test 1 milliseconds: 1,080
Test 1 output length: 29,700,006
Testing multiple calls to Append() built-in method...
--------------------------------------------
Test 2 iterations: 1,000,000
Test 2 milliseconds: 1,001
Test 2 output length: 29,700,006
Testing AppendFormat() built-in method...
--------------------------------------------
Test 3 iterations: 1,000,000
Test 3 milliseconds: 1,124
Test 3 output length: 29,700,006
So your extension method is only slightly slower than the Append() method and is slightly faster than the AppendFormat() method, but in all 3 cases, the difference is entirely too trivial to worry about. Thus, if your extension method enhances the readability of your code, use it!
It's a little bit of overhead creating the extra array, but I doubt that it's a lot. You should measure.
If it turns out that the overhead of creating string arrays is significant, you can mitigate it by having several overloads - one for two parameters, one for three, one for four, and so on - so that only when you get to a higher number of parameters (e.g. six or seven) will it need to create the array. The overloads would be like this:
public static void Append(this StringBuilder builder, string item1, string item2)
{
    builder.Append(item1);
    builder.Append(item2);
}

public static void Append(this StringBuilder builder, string item1, string item2, string item3)
{
    builder.Append(item1);
    builder.Append(item2);
    builder.Append(item3);
}

public static void Append(this StringBuilder builder, string item1, string item2,
                          string item3, string item4)
{
    builder.Append(item1);
    builder.Append(item2);
    builder.Append(item3);
    builder.Append(item4);
}
// etc
And then one final overload using params, e.g.
public static void Append(this StringBuilder builder, string item1, string item2,
                          string item3, string item4, params string[] otherItems)
{
    builder.Append(item1);
    builder.Append(item2);
    builder.Append(item3);
    builder.Append(item4);
    foreach (string item in otherItems)
    {
        builder.Append(item);
    }
}
I'd certainly expect these (or just your original extension method) to be faster than using AppendFormat - which needs to parse the format string, after all.
Note that I didn't make these overloads call each other pseudo-recursively - I suspect they'd be inlined, but if they weren't the overhead of setting up a new stack frame etc could end up being significant. (We're assuming the overhead of the array is significant, if we've got this far.)
Other than a bit of overhead, I don't personally see any issues with it. Definitely more readable. As long as you're passing a reasonable number of params in I don't see the problem.
From a clarity perspective, your extension is ok.
It would probably be best to simply use the .Append(x).Append(y).Append(z) format if you never have more than about 5 or 6 items.
StringBuilder itself would only net you a performance gain if you were processing many thousands of items. In addition you'll be creating the array every time you call the method.
So if you're doing it for clarity, that's ok. If you're doing it for efficiency, then you're probably on the wrong track.
I wouldn't say you're undermining its efficiency, but you may be doing something inefficient when a more efficient method is available. AppendFormat is what I think you want here. If the {0}{1}{2} string being used constantly is too ugly, I tend to put my format strings in consts above, so the look would be more or less the same as your extension.
sb.AppendFormat(SETTING_FORMAT, var1, var2, var3);
I haven't tested recently, but in the past, StringBuilder was actually slower than plain-vanilla string concatenation ("this " + "that") until you get to about 7 concatenations.
If this is string concatenation that is not happening in a loop, you may want to consider if you should be using the StringBuilder at all. (In a loop, I start to worry about allocations with plain-vanilla string concatenation, since strings are immutable.)
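To illustrate the distinction this answer is drawing (a sketch, not a benchmark):
string first = "a", middle = "b", last = "c";

// A handful of '+' operations on non-constant strings compile to a single
// String.Concat call, so a StringBuilder gains you essentially nothing here.
string greeting = first + " " + middle + " " + last;

// In a loop, '+=' allocates a new string and copies the old contents on every
// iteration, so total copying grows roughly quadratically with the item count.
string slow = "";
for (int i = 0; i < 10000; i++)
    slow += i;

// A StringBuilder appends into a resizable internal buffer instead.
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 10000; i++)
    sb.Append(i);
string fast = sb.ToString();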
Potentially even faster, because it performs at most one reallocation/copy step for many appends:
public static void Append(this StringBuilder stringBuilder, params string[] args)
{
    int required = stringBuilder.Length;
    foreach (string arg in args)
        required += arg.Length;
    if (stringBuilder.Capacity < required)
        stringBuilder.Capacity = required;
    foreach (string arg in args)
        stringBuilder.Append(arg);
}
Ultimately it comes down to which one results in less string creation. I have a feeling that the extension will result in a higher string count than using the format string. But the performance probably won't be that different.
Chris,
Inspired by this Jon Skeet response (second answer), I slightly rewrote your code. Basically, I added the TestRunner method which runs the passed-in function and reports the elapsed time, eliminating a little redundant code. Not to be smug, but rather as a programming exercise for myself. I hope it's helpful.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace SBTest
{
class Program
{
private static void Main(string[] args)
{
// JIT everything
AppendTest(1);
AppendFormatTest(1);
int iterations = 1000000;
// Run Tests
TestRunner(AppendTest, iterations);
TestRunner(AppendFormatTest, iterations);
Console.ReadLine();
}
private static void TestRunner(Func<int, long> action, int iterations)
{
GC.Collect();
var sw = Stopwatch.StartNew();
long length = action(iterations);
sw.Stop();
Console.WriteLine("--------------------- {0} -----------------------", action.Method.Name);
Console.WriteLine("iterations: {0:n0}", iterations);
Console.WriteLine("milliseconds: {0:n0}", sw.ElapsedMilliseconds);
Console.WriteLine("output length: {0:n0}", length);
Console.WriteLine("");
}
private static long AppendTest(int iterations)
{
var sb = new StringBuilder();
for (var i = 0; i < iterations; i++)
{
sb.Append("TEST" + i.ToString("00000"),
"TEST" + (i + 1).ToString("00000"),
"TEST" + (i + 2).ToString("00000"));
}
return sb.Length;
}
private static long AppendFormatTest(int iterations)
{
var sb = new StringBuilder();
for (var i = 0; i < iterations; i++)
{
sb.AppendFormat("{0}{1}{2}",
"TEST" + i.ToString("00000"),
"TEST" + (i + 1).ToString("00000"),
"TEST" + (i + 2).ToString("00000"));
}
return sb.Length;
}
}
public static class SBExtentions
{
public static void Append(this StringBuilder sb, params string[] args)
{
foreach (var arg in args)
sb.Append(arg);
}
}
}
Here's the output:
--------------------- AppendTest -----------------------
iterations: 1,000,000
milliseconds: 1,274
output length: 29,700,006
--------------------- AppendFormatTest -----------------------
iterations: 1,000,000
milliseconds: 1,381
output length: 29,700,006

What is the difference between calling a delegate directly, using DynamicInvoke, and using DynamicInvokeImpl?

The docs for both DynamicInvoke and DynamicInvokeImpl say:
Dynamically invokes (late-bound) the
method represented by the current
delegate.
I notice that DynamicInvoke and DynamicInvokeImpl take an array of objects instead of a specific list of arguments (which is the late-bound part, I'm guessing). But is that the only difference? And what is the difference between DynamicInvoke and DynamicInvokeImpl?
The main difference between calling it directly (which is shorthand for Invoke(...)) and using DynamicInvoke is performance; a factor of more than 700 by my measurement (below).
With the direct/Invoke approach, the arguments are already pre-validated via the method signature, and the code already exists to pass those into the method directly (I would say "as IL", but I seem to recall that the runtime provides this directly, without any IL). With DynamicInvoke it needs to check them from the array via reflection (i.e. are they all appropriate for this call; do they need unboxing, etc); this is slow (if you are using it in a tight loop), and should be avoided where possible.
Example; results first (I increased the LOOP count from the previous edit, to give a sensible comparison):
Direct: 53ms
Invoke: 53ms
DynamicInvoke (re-use args): 37728ms
DynamicInvoke (per-call args): 39911ms
With code:
static void DoesNothing(int a, string b, float? c) { }
static void Main() {
Action<int, string, float?> method = DoesNothing;
int a = 23;
string b = "abc";
float? c = null;
const int LOOP = 5000000;
Stopwatch watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method(a, b, c);
}
watch.Stop();
Console.WriteLine("Direct: " + watch.ElapsedMilliseconds + "ms");
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.Invoke(a, b, c);
}
watch.Stop();
Console.WriteLine("Invoke: " + watch.ElapsedMilliseconds + "ms");
object[] args = new object[] { a, b, c };
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.DynamicInvoke(args);
}
watch.Stop();
Console.WriteLine("DynamicInvoke (re-use args): "
+ watch.ElapsedMilliseconds + "ms");
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.DynamicInvoke(a,b,c);
}
watch.Stop();
Console.WriteLine("DynamicInvoke (per-cal args): "
+ watch.ElapsedMilliseconds + "ms");
}
Coincidentally I have found another difference.
If Invoke throws an exception it can be caught by the expected exception type.
However, DynamicInvoke wraps it in a TargetInvocationException. Here is a small demo:
using System;
using System.Collections.Generic;
namespace DynamicInvokeVsInvoke
{
public class StrategiesProvider
{
private readonly Dictionary<StrategyTypes, Action> strategies;
public StrategiesProvider()
{
strategies = new Dictionary<StrategyTypes, Action>
{
{StrategyTypes.NoWay, () => { throw new NotSupportedException(); }}
// more strategies...
};
}
public void CallStrategyWithDynamicInvoke(StrategyTypes strategyType)
{
strategies[strategyType].DynamicInvoke();
}
public void CallStrategyWithInvoke(StrategyTypes strategyType)
{
strategies[strategyType].Invoke();
}
}
public enum StrategyTypes
{
NoWay = 0,
ThisWay,
ThatWay
}
}
While the second test goes green, the first one faces a TargetInvocationException.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SharpTestsEx;
namespace DynamicInvokeVsInvoke.Tests
{
[TestClass]
public class DynamicInvokeVsInvokeTests
{
[TestMethod]
public void Call_strategy_with_dynamic_invoke_can_be_catched()
{
bool catched = false;
try
{
new StrategiesProvider().CallStrategyWithDynamicInvoke(StrategyTypes.NoWay);
}
catch(NotSupportedException exc)
{
/* Fails because the NotSupportedException is wrapped
* inside a TargetInvocationException! */
catched = true;
}
catched.Should().Be(true);
}
[TestMethod]
public void Call_strategy_with_invoke_can_be_catched()
{
bool catched = false;
try
{
new StrategiesProvider().CallStrategyWithInvoke(StrategyTypes.NoWay);
}
catch(NotSupportedException exc)
{
catched = true;
}
catched.Should().Be(true);
}
}
}
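If you are stuck with DynamicInvoke but still want to handle the original exception type, one option (a sketch, reusing the demo classes above) is to catch the wrapper and filter on its InnerException:
using System;
using System.Reflection;   // TargetInvocationException

// ...

try
{
    new StrategiesProvider().CallStrategyWithDynamicInvoke(StrategyTypes.NoWay);
}
catch (TargetInvocationException ex) when (ex.InnerException is NotSupportedException)
{
    // DynamicInvoke preserved the original exception as InnerException.
    Console.WriteLine(ex.InnerException.Message);
}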
Really there is no functional difference between the two. If you pull up the implementation in Reflector, you'll notice that DynamicInvoke just calls DynamicInvokeImpl with the same set of arguments. No extra validation is done, and it's a non-virtual method, so there is no chance for its behavior to be changed by a derived class. DynamicInvokeImpl is a virtual method where all of the actual work is done.
