I believe Microsoft claims that generics are faster than plain polymorphism when dealing with reference types. However, the following simple test (64-bit, VS2012) would indicate otherwise: I typically get 10% faster stopwatch times using polymorphism. Am I misinterpreting the results?
public interface Base { Int64 Size { get; } }
public class Derived : Base { public Int64 Size { get { return 10; } } }
public class GenericProcessor<TT> where TT : Base
{
private Int64 sum;
public GenericProcessor(){ sum = 0; }
public void process(TT o){ sum += o.Size; }
public Int64 Sum { get { return sum; } }
}
public class PolymorphicProcessor
{
private Int64 sum;
public PolymorphicProcessor(){ sum = 0; }
public void process(Base o){ sum += o.Size; }
public Int64 Sum { get { return sum; } }
}
static void Main(string[] args)
{
var generic_processor = new GenericProcessor<Derived>();
var polymorphic_processor = new PolymorphicProcessor();
Stopwatch sw = new Stopwatch();
int N = 100000000;
var derived = new Derived();
sw.Start();
for (int i = 0; i < N; ++i) generic_processor.process(derived);
sw.Stop();
Console.WriteLine("Sum ="+generic_processor.Sum + " Generic performance = " + sw.ElapsedMilliseconds + " millisec");
sw.Restart();
sw.Start();
for (int i = 0; i < N; ++i) polymorphic_processor.process(derived);
sw.Stop();
Console.WriteLine("Sum ="+polymorphic_processor.Sum+ " Poly performance = " + sw.ElapsedMilliseconds + " millisec");
Even more surprising (and confusing) is that if I add a type cast to the polymorphic version of the processor as follows, it consistently runs ~20% faster than the generic version.
public void process(Base trade)
{
sum += ((Derived)trade).Size; // cast not needed - just an experiment
}
What's going on here? I understand generics can help avoid costly boxing and unboxing when dealing with primitive types, but I'm dealing strictly with reference types here.
Execute the test under .NET 4.5 x64 with Ctrl-F5 (without the debugger), and with N increased 10x. That way the results reproduce reliably, no matter what order the tests run in.
With generics over reference types you still get the same vtable/interface lookup, because there is just one compiled method shared by all reference types; there is no specialization for Derived. Based on that, executing the callvirt should cost the same.
Furthermore, generic methods have a hidden method argument that is typeof(T) (this is what allows you to actually write typeof(T) in generic code!). That is additional overhead, which helps explain why the generic version is slower.
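That hidden type argument is what makes typeof(T) usable at all inside shared generic code; a minimal sketch (my own example, not from the question's code) of observing it:

```csharp
using System;

public static class TypeOfDemo
{
    // The runtime passes the concrete type argument along with the call,
    // which is why typeof(T) is available inside shared generic code.
    public static string Describe<T>()
    {
        return typeof(T).Name;
    }

    public static void Main()
    {
        Console.WriteLine(Describe<string>()); // String
        Console.WriteLine(Describe<long>());   // Int64
    }
}
```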
Why is the cast faster than the interface call? The cast is just a pointer compare and a perfectly predictable branch. After the cast the concrete type of the object is known, allowing for a faster call.
// Pseudocode for the check and call the JIT can emit:
if (trade.GetType() != typeof(Derived)) throw new InvalidCastException();
Derived.Size(trade); // calling the concrete method directly, potentially inlining it
All of this is educated guessing. Validate by looking at the disassembly.
If you add the cast you get the following assembly:
My assembly skills are not enough to fully decode this. However:
#16 loads the vtable pointer of Derived
#22 and #25 are the branch that tests the vtable; this completes the cast
at #32 the cast is done; note that from this point on there is no call: Size was inlined
#35: a lea implements the add
#39: stores back to this.sum
The same trick works with the generic version (((Derived)(Base)o).Size).
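A sketch of what that trick looks like inside the generic processor (assuming the Base/Derived types from the question; the intermediate cast to Base is needed because a type parameter cannot be cast directly to an unrelated class):

```csharp
public interface Base { long Size { get; } }
public class Derived : Base { public long Size { get { return 10; } } }

public class GenericProcessor<TT> where TT : Base
{
    private long sum;

    public void process(TT o)
    {
        // Casting up to Base and then down to Derived lets the JIT insert
        // a cheap, predictable type check and then call (or inline) the
        // concrete Derived.Size instead of dispatching through the interface.
        sum += ((Derived)(Base)o).Size;
    }

    public long Sum { get { return sum; } }
}
```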
I believe Servy was correct: it is a problem with your test. I reversed the order of the tests (just a hunch):
internal class Program
{
public interface Base
{
Int64 Size { get; }
}
public class Derived : Base
{
public Int64 Size
{
get
{
return 10;
}
}
}
public class GenericProcessor<TT>
where TT : Base
{
private Int64 sum;
public GenericProcessor()
{
sum = 0;
}
public void process(TT o)
{
sum += o.Size;
}
public Int64 Sum
{
get
{
return sum;
}
}
}
public class PolymorphicProcessor
{
private Int64 sum;
public PolymorphicProcessor()
{
sum = 0;
}
public void process(Base o)
{
sum += o.Size;
}
public Int64 Sum
{
get
{
return sum;
}
}
}
private static void Main(string[] args)
{
var generic_processor = new GenericProcessor<Derived>();
var polymorphic_processor = new PolymorphicProcessor();
Stopwatch sw = new Stopwatch();
int N = 100000000;
var derived = new Derived();
sw.Start();
for (int i = 0; i < N; ++i) polymorphic_processor.process(derived);
sw.Stop();
Console.WriteLine(
"Sum =" + polymorphic_processor.Sum + " Poly performance = " + sw.ElapsedMilliseconds + " millisec");
sw.Restart();
sw.Start();
for (int i = 0; i < N; ++i) generic_processor.process(derived);
sw.Stop();
Console.WriteLine(
"Sum =" + generic_processor.Sum + " Generic performance = " + sw.ElapsedMilliseconds + " millisec");
Console.Read();
}
}
In this case the polymorphic version is slower in my tests. This shows that whichever test runs first is significantly slower than the second. It could be class loading the first time, preemption, who knows...
I just want to note that I am not arguing that generics are faster or as fast. I'm simply trying to prove that these kinds of tests don't make a case one way or the other.
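One way to reduce that ordering effect is to run each code path once, untimed, before measuring it, so JIT compilation and cache warm-up are excluded. A minimal sketch (a real harness such as BenchmarkDotNet does much more, e.g. statistical analysis and GC control):

```csharp
using System;
using System.Diagnostics;

public static class Bench
{
    // Runs the action once untimed so it is JIT-compiled and caches are
    // warm, then times the requested number of iterations.
    public static long Measure(Action action, int iterations)
    {
        action(); // warm-up run, excluded from the measurement

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; ++i) action();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```

With both loops wrapped this way, the first-runner penalty should largely disappear, whatever order the tests are in.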
Related
I see that there is a similar question for C++. Does anyone know why this method works when it is non-generic, but as soon as I make it generic the random-number portion of the code fails?
Error: Cannot implicitly convert type 'int' to 'T'. If I can't use generics, I will have to rewrite the same function over and over for each different length of array.
public void fillGenericArray<T>(T[] inputArray) where T : IComparable
{
var randomNumb1 = new Random();
for (int i = 0; i < inputArray.Length - 1; i++)
{
Console.WriteLine($"{inputArray[i] = randomNumb1.Next(1, 501)},");
}
}
I had to look twice at this, but here's the issue: because inputArray is an array of type T, the expression inputArray[i] has type T, not int, even though the index i is an int. And so, conversely, only a value of type T can be assigned to it.
A generic method like this might achieve your goal:
public static void fillGenericArray<T>(T[] inputArray)
{
for (int i = 0; i < inputArray.Length; i++)
{
// Where T has a CTor that takes an int as an argument
inputArray[i] = (T)Activator.CreateInstance(typeof(T), Random.Next(1, 501));
}
}
(Thanks to this SO post for refreshing my memory about instantiating T with arguments.)
You could also use Enumerable.Range() to get the same result without writing a method at all:
// Generically, for any 'SomeClass' with a CTor(int value)
SomeClass[] arrayOfT =
Enumerable.Range(1, LENGTH).Select(i => new SomeClass(Random.Next(1, 501)))
.ToArray();
(Slightly Modified with help from this SO post) - see the answer using Enumerable.Range().
Here is a test runner:
class Program
{
static Random Random { get; } = new Random();
const int LENGTH = 10;
static void Main(string[] args)
{
Console.WriteLine();
Console.WriteLine("With a generic you could do this...");
SomeClass[] arrayOfT;
arrayOfT = new SomeClass[LENGTH];
fillGenericArray<SomeClass>(arrayOfT);
Console.WriteLine(string.Join(Environment.NewLine, arrayOfT.Select(field=>field.Value)));
Console.WriteLine();
Console.WriteLine("But perhaps it's redundant, because Enumerable is already Generic!");
arrayOfT = Enumerable.Range(1, LENGTH).Select(i => new SomeClass(Random.Next(1, 501))).ToArray();
Console.WriteLine(string.Join(Environment.NewLine, arrayOfT.Select(field => field.Value)));
// Pause
Console.WriteLine(Environment.NewLine + "Any key to exit");
Console.ReadKey();
}
public static void fillGenericArray<T>(T[] inputArray)
{
for (int i = 0; i < inputArray.Length; i++)
{
inputArray[i] = (T)Activator.CreateInstance(typeof(T), Random.Next(1, 501));
}
}
class SomeClass
{
public SomeClass(int value)
{
Value = value;
}
public int Value { get; set; }
}
}
Clone or Download this example from GitHub.
There is no reason to use generics here. Just replace T with int and you will have a function that does what you want (based on your question and the comment below it).
EDIT: From your comment it seems you misunderstand the purpose of generics. The non-generic function WILL work for all lengths of the array.
And to answer why the change to generics fails: you are trying to assign an int to the generic type T, which could be anything, so the compiler will not allow the implicit conversion.
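If the array really will hold ints at runtime, one common workaround is to box the value through object, which defers the type check to runtime. This is a sketch (not the only fix): it compiles for any T, but throws InvalidCastException if T is not actually int:

```csharp
using System;

public static class ArrayFiller
{
    private static readonly Random random = new Random();

    public static void fillGenericArray<T>(T[] inputArray)
    {
        for (int i = 0; i < inputArray.Length; i++)
        {
            // (T)(object) compiles for any T; the cast is checked at runtime,
            // so this only succeeds when T really is int.
            inputArray[i] = (T)(object)random.Next(1, 501);
        }
    }
}
```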
This question already has answers here:
String.Format vs "string" + "string" or StringBuilder? [duplicate]
(2 answers)
Closed 6 years ago.
I have a program that will write a series of files in a loop. The filename is constructed using a parameter from an object supplied to the method.
ANTS Performance Profiler says this is dog slow and I'm not sure why:
public string CreateFilename(MyObject obj)
{
return "sometext-" + obj.Name + ".txt";
}
Is there a more performant way of doing this? The method is hit thousands of times and I don't know of a good way outside of having a discrete method for this purpose since the input objects are out of my control and regularly change.
The compiler will optimize your two concats into one call to:
String.Concat("sometext-", obj.Name, ".txt")
There is no faster way to do this.
If you instead compute the filename within the class itself, it will run much faster, in exchange for decreased performance when modifying the object. Mind you, I'd be very concerned if computing a filename was a bottleneck; writing to the file is way slower than coming up with its name.
See code samples below. When I benchmarked them with optimizations on (in LINQPad 5), Test2 ran about 15x faster than Test1. Among other things, Test2 doesn't constantly generate/discard tiny string objects.
void Main()
{
Test1();
Test1();
Test1();
Test2();
Test2();
Test2();
}
void Test1()
{
System.Diagnostics.Stopwatch sw = new Stopwatch();
MyObject1 mo = new MyObject1 { Name = "MyName" };
sw.Start();
long x = 0;
for (int i = 0; i < 10000000; ++i)
{
x += CreateFileName(mo).Length;
}
Console.WriteLine(x); //Sanity Check, prevent clever compiler optimizations
sw.ElapsedMilliseconds.Dump("Test1");
}
public string CreateFileName(MyObject1 obj)
{
return "sometext-" + obj.Name + ".txt";
}
void Test2()
{
System.Diagnostics.Stopwatch sw = new Stopwatch();
MyObject2 mo = new MyObject2 { Name = "MyName" };
sw.Start();
long x = 0;
for (int i = 0; i < 10000000; ++i)
{
x += mo.FileName.Length;
}
Console.WriteLine(x); //Sanity Check, prevent clever compiler optimizations
sw.ElapsedMilliseconds.Dump("Test2");
}
public class MyObject1
{
public string Name;
}
public class MyObject2
{
public string FileName { get; private set;}
private string _name;
public string Name
{
get
{
return _name;
}
set
{
_name=value;
FileName = "sometext-" + _name + ".txt";
}
}
}
I also tested adding memoization to CreateFileName, but it barely improved performance over Test1, and it couldn't possibly beat out Test2, since it performs the equivalent steps with additional overhead for hash lookups.
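For reference, the memoized variant looked roughly like this (a sketch; the class name FileNameCache is mine, and the Dictionary lookup cost is exactly why it barely beats plain concatenation):

```csharp
using System.Collections.Generic;

public class MyObject1
{
    public string Name;
}

public static class FileNameCache
{
    private static readonly Dictionary<string, string> cache =
        new Dictionary<string, string>();

    public static string CreateFileName(MyObject1 obj)
    {
        string result;
        if (!cache.TryGetValue(obj.Name, out result))
        {
            // Build the string once per distinct name, then reuse it.
            result = "sometext-" + obj.Name + ".txt";
            cache[obj.Name] = result;
        }
        return result;
    }
}
```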
I have a requirement where:
1. I need to store objects of any type in list
2. Avoid casting calls as much as possible
To that end I tried to come up with something, but no matter what I tried I could not get rid of boxing/unboxing. I wanted to know whether any of you have come across something that will achieve it.
The class I have created is mostly useless unless you are dealing with small collections, because in both memory and performance it costs about 1.5 times as much as ArrayList. I am trying to find ways to improve at least one of the two (preferably performance).
Any feedback is appreciated.
public class Castable
{
Object _o;
public override bool Equals(object obj) { return base.Equals(obj); }
public override int GetHashCode() { return base.GetHashCode(); }
public bool Equals<T>(T obj)
{
T v1 = (T)this._o;
//T v2 = obj;
//var v2 = obj; // Convert.ChangeType(obj, obj.GetType());
// This doesn't work.. (Cannot convert T to Castable
//var v2 = Convert.ChangeType(this.GetType() == obj.GetType() ?
//((Castable)obj)._o.GetType(), obj.GetType());
//if (((T)this._o) != obj) //<== why this doesn't work?
//if (v1 == obj) //<== "Operator '==' cannot be applied to operands of type 'T' and 'T'"
if(v1.Equals(obj))
{
return true;
}
return false;
}
public bool Equals(Castable obj)
{
var v = Convert.ChangeType(obj._o, obj._o.GetType());
return Equals(v);
}
public static bool operator ==(Castable a, Castable b)
{
return a.Equals(b);
}
public static bool operator !=(Castable a, Castable b)
{
return !a.Equals(b);
}
#region HOW CAN WE USE A GENERIC TYPE FOR THE == AND != OPERATORS?
public static bool operator ==(Castable a, object b)
{
return a.Equals(b);
}
public static bool operator !=(Castable a, object b)
{
return !a.Equals(b);
}
#endregion
public void Set<T>(T t) { _o = t; }
public T Get<T>() { return (T)_o; }
public static long TestLookup(IList list, int elements, int lookups)
{
object value;
Stopwatch watch = new Stopwatch();
watch.Start();
for (long index = 0; index < lookups; ++index)
{
value = list[random.Next(0, elements - 1)];
}
watch.Stop();
return watch.ElapsedMilliseconds;
}
public static long TestCompare(IList list, int elements, int lookups)
{
//object value;
bool match;
Stopwatch watch = new Stopwatch();
watch.Start();
for (long index = 0; index < lookups; ++index)
{
match = random.Next() == (int)list[random.Next(0, elements - 1)];
}
watch.Stop();
return watch.ElapsedMilliseconds;
}
public static long TestCompareCastable(IList<Castable> list, int elements, int lookups)
{
//object value;
bool match;
Stopwatch watch = new Stopwatch();
watch.Start();
for (long index = 0; index < lookups; ++index)
{
match = list[random.Next(0, elements - 1)] == random.Next(); //most of the times 1.4 times
//match = list[random.Next(0, elements - 1)].Equals(random.Next()); // may be 1.3 times ArrayList
}
watch.Stop();
return watch.ElapsedMilliseconds;
}
public static void Test(int elements, int lookups, int times)
{
List<int> intList = new List<int>();
List<Castable> castableList = new List<Castable>();
ArrayList intArrayList = new ArrayList();
if (Stopwatch.IsHighResolution)
Console.WriteLine("We have a high resolution timer available");
long frequency = Stopwatch.Frequency;
Console.WriteLine(" Timer frequency in ticks per second = {0}", frequency);
for (int index = 0; index < elements; ++index)
{
intList.Add(random.Next());
intArrayList.Add(random.Next());
Castable c = new Castable();
c.Set(random.Next());
castableList.Add(c);
}
long ms = 0;
string result = "";
string ratios = "";
for (int time = 0; time < times; ++time)
{
ms = TestLookup(intList, elements, lookups);
result += "intList Lookup Time " + ms.ToString() + " MS\n";
ms = TestLookup(castableList, elements, lookups);
result += "castableList Lookup Time " + ms.ToString() + " MS\n";
ms = TestLookup(intArrayList, elements, lookups);
result += "intArrayList Lookup Time " + ms.ToString() + " MS\n";
ms = TestCompare(intList, elements, lookups);
result += "intList Compare Time " + ms.ToString() + " MS\n";
long msarraylist = ms = TestCompare(intArrayList, elements, lookups);
result += "intArrayList Compare Time " + ms.ToString() + " MS\n";
ms = TestCompareCastable(castableList, elements, lookups);
result += "castableList Compare Time " + ms.ToString() + " MS\n";
ratios += String.Format("round: {0}, ratio: {1}\n", time, (float)ms / msarraylist);
}
//MessageBox.Show(result);
MessageBox.Show(ratios);
int i = 10;
Castable o1 = new Castable();
o1.Set(i);
int j = 10;
Castable o2 = new Castable();
o2.Set(j);
if (!o1.Equals(10))
{
Console.WriteLine("unequal");
}
if (!o1.Equals(o2))
{
Console.WriteLine("unequal");
}
if (o1 != j)
{
Console.WriteLine("unequal");
}
int x = o1.Get<int>();
}
}
EDIT
In short I am trying to achieve:
#winSharp93: yes, in short:
List<object> GenericGenericCollection = new List<object>();
GenericGenericCollection.Add("a sonnet");
GenericGenericCollection.Add(42);
GenericGenericCollection.Add(new MyOwnCustomType());
EDIT AGAIN
There are two ways I found:
1. In .NET 4, a new 'dynamic' keyword is introduced. If you replace the line Object _o; with dynamic _o; you can use the code as it is. The problem is that although dynamic is supposed to be a dynamic type, its performance is just like boxing.
2. The performance can be improved by adding an implicit (which I prefer) or explicit cast operator instead of relying on the generic == operator. Based on http://igoro.com/archive/fun-with-c-generics-down-casting-to-a-generic-type/ I added the following class. It takes care of boxing and performance: with it, performance is a little better than an ArrayList of int or Castable. Of course it still has a long way to go compared with List<int>.
The only problem, from my point of view, is that once the object is assigned to a plain Any reference there is no way to get at the concrete type embedded inside AnyInternal<T>; nor could I find a way to write a method T Get(). Even the dynamic keyword fails at runtime at the statement:
Any.AnyInternal<dynamic> any = (Any.AnyInternal<dynamic>)anyInstanceContainingAnyInternalForInt;
//too bad I can't seal Any after AnyInternal<T> has derived from it.
public abstract class Any
{
public static implicit operator int(Any any)
{
return Any.ToType<int>(any).Data;
}
public static AnyInternal<T> ToType<T>(Any any)
{
return ((AnyInternal<T>)any);
}
public class AnyInternal<T> : Any
{
private T _data;
public T Data { get { return _data; } }
public AnyInternal(T data)
{
_data = data;
}
}
}
Use the generic List<T> (inside System.Collections.Generic) instead of ArrayList.
No boxing/unboxing will happen for value types.
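The difference is visible even in a tiny example: with ArrayList every int is boxed into a heap object and must be cast back out, while List<int> stores the values inline:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        var arrayList = new ArrayList();
        arrayList.Add(42);                      // boxes the int into a heap object
        int fromArrayList = (int)arrayList[0];  // cast required; unboxes

        var genericList = new List<int>();
        genericList.Add(42);                    // stored inline, no boxing
        int fromGenericList = genericList[0];   // no cast, no unboxing

        Console.WriteLine(fromArrayList == fromGenericList); // True
    }
}
```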
I have a struct called "Complex" in my project (built with C#) and, as the name implies, it is a struct for complex numbers. The struct has a built-in method called "Modulus" so that I can calculate the modulus of a complex number. Things are quite easy up to now.
The thing is, I create an array of this struct and I want to sort the array according to the modulus of the complex numbers it contains (greater to smaller). Is there a way to do that? (Any algorithm suggestions are welcome.)
Thank you!
Complex[] complexArray = ...
Complex[] sortedArray = complexArray.OrderByDescending(c => c.Modulus()).ToArray();
First of all, you can improve performance by comparing squared moduli instead of moduli.
You don't need the square root: since both sides are non-negative, "sqrt( a * a + b * b ) >= sqrt( c * c + d * d )" is equivalent to "a * a + b * b >= c * c + d * d".
Then, you can write a comparer to sort complex numbers.
public class ComplexModulusComparer :
IComparer<Complex>,
IComparer
{
public static readonly ComplexModulusComparer Default = new ComplexModulusComparer();
public int Compare(Complex a, Complex b)
{
return a.ModulusSquared().CompareTo(b.ModulusSquared());
}
int IComparer.Compare(object a, object b)
{
return ((Complex)a).ModulusSquared().CompareTo(((Complex)b).ModulusSquared());
}
}
You can also write the reverse comparer, since you want to sort from greater to smaller.
public class ComplexModulusReverseComparer :
IComparer<Complex>,
IComparer
{
public static readonly ComplexModulusReverseComparer Default = new ComplexModulusReverseComparer();
public int Compare(Complex a, Complex b)
{
return - a.ModulusSquared().CompareTo(b.ModulusSquared());
}
int IComparer.Compare(object a, object b)
{
return - ((Complex)a).ModulusSquared().CompareTo(((Complex)b).ModulusSquared());
}
}
To sort an array you can then write two nice extension methods...
public static void SortByModulus(this Complex[] array)
{
Array.Sort(array, ComplexModulusComparer.Default);
}
public static void SortReverseByModulus(this Complex[] array)
{
Array.Sort(array, ComplexModulusReverseComparer.Default);
}
Then in your code...
Complex[] myArray ...;
myArray.SortReverseByModulus();
You can also implement IComparable if you wish, but from my point of view using an IComparer is the more correct and formal approach.
public struct Complex :
IComparable<Complex>
{
public double R;
public double I;
public double Modulus() { return Math.Sqrt(R * R + I * I); }
public double ModulusSquared() { return R * R + I * I; }
public int CompareTo(Complex other)
{
return this.ModulusSquared().CompareTo(other.ModulusSquared());
}
}
And then you can write a ReverseComparer that can wrap every kind of comparer:
public class ReverseComparer<T> :
IComparer<T>
{
private IComparer<T> comparer;
public static readonly ReverseComparer<T> Default = new ReverseComparer<T>();
public ReverseComparer() :
this(Comparer<T>.Default)
{
}
public ReverseComparer(IComparer<T> comparer)
{
this.comparer = comparer;
}
public int Compare(T a, T b)
{
return - this.comparer.Compare(a, b);
}
}
Then when you need to sort....
Complex[] array ...;
Array.Sort(array, ReverseComparer<Complex>.Default);
or in case you have another IComparer...
Complex[] array ...;
Array.Sort(array, new ReverseComparer<Complex>(myothercomparer));
RE-EDIT: OK, I performed some speed tests.
Compiled with C# 4.0, in release mode, launched with all instances of Visual Studio closed.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace TestComplex
{
class Program
{
public struct Complex
{
public double R;
public double I;
public double ModulusSquared()
{
return this.R * this.R + this.I * this.I;
}
}
public class ComplexComparer :
IComparer<Complex>
{
public static readonly ComplexComparer Default = new ComplexComparer();
public int Compare(Complex x, Complex y)
{
return x.ModulusSquared().CompareTo(y.ModulusSquared());
}
}
private static void RandomComplexArray(Complex[] myArray)
{
// We use always the same seed to avoid differences in quicksort.
Random r = new Random(2323);
for (int i = 0; i < myArray.Length; ++i)
{
myArray[i].R = r.NextDouble() * 10;
myArray[i].I = r.NextDouble() * 10;
}
}
static void Main(string[] args)
{
// We perform some first operation to ensure JIT compiled and optimized everything before running the real test.
Stopwatch sw = new Stopwatch();
Complex[] tmp = new Complex[2];
for (int repeat = 0; repeat < 10; ++repeat)
{
sw.Start();
tmp[0] = new Complex() { R = 10, I = 20 };
tmp[1] = new Complex() { R = 30, I = 50 };
ComplexComparer.Default.Compare(tmp[0], tmp[1]);
tmp.OrderByDescending(c => c.ModulusSquared()).ToArray();
sw.Stop();
}
int[] testSizes = new int[] { 5, 100, 1000, 100000, 250000, 1000000 };
for (int testSizeIdx = 0; testSizeIdx < testSizes.Length; ++testSizeIdx)
{
Console.WriteLine("For " + testSizes[testSizeIdx].ToString() + " input ...");
// We create our big array
Complex[] myArray = new Complex[testSizes[testSizeIdx]];
double bestTime = double.MaxValue;
// Now we execute repeatCount times our test.
const int repeatCount = 15;
for (int repeat = 0; repeat < repeatCount; ++repeat)
{
// We fill our array with random data
RandomComplexArray(myArray);
// Now we perform our sorting.
sw.Reset();
sw.Start();
Array.Sort(myArray, ComplexComparer.Default);
sw.Stop();
double elapsed = sw.Elapsed.TotalMilliseconds;
if (elapsed < bestTime)
bestTime = elapsed;
}
Console.WriteLine("Array.Sort best time is " + bestTime.ToString());
// Now we perform our test using linq
bestTime = double.MaxValue; // i forgot this before
for (int repeat = 0; repeat < repeatCount; ++repeat)
{
// We fill our array with random data
RandomComplexArray(myArray);
// Now we perform our sorting.
sw.Reset();
sw.Start();
myArray = myArray.OrderByDescending(c => c.ModulusSquared()).ToArray();
sw.Stop();
double elapsed = sw.Elapsed.TotalMilliseconds;
if (elapsed < bestTime)
bestTime = elapsed;
}
Console.WriteLine("linq best time is " + bestTime.ToString());
Console.WriteLine();
}
Console.WriteLine("Press enter to quit.");
Console.ReadLine();
}
}
}
And here the results:
For 5 input ...
Array.Sort best time is 0,0004
linq best time is 0,0018
For 100 input ...
Array.Sort best time is 0,0267
linq best time is 0,0298
For 1000 input ...
Array.Sort best time is 0,3568
linq best time is 0,4107
For 100000 input ...
Array.Sort best time is 57,3536
linq best time is 64,0196
For 250000 input ...
Array.Sort best time is 157,8832
linq best time is 194,3723
For 1000000 input ...
Array.Sort best time is 692,8211
linq best time is 1058,3259
Press enter to quit.
My machine is an Intel i5, 64-bit Windows 7.
Sorry! I made a small, stupid bug in the previous edit!
ARRAY.SORT OUTPERFORMS LINQ, yes by a very small amount, but as suspected the gap grows with n, and seemingly not linearly. It looks like a mix of code overhead and memory effects (cache misses, object allocation, GC... I don't know).
You can always use SortedList :) Assuming the modulus is an int:
var complexNumbers = new SortedList<int, Complex>();
complexNumbers.Add(number.Modulus(), number);
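One caveat worth noting (my own observation, not from the answer above): SortedList<TKey, TValue> requires unique keys, so two numbers with the same modulus cannot both be added:

```csharp
using System;
using System.Collections.Generic;

class SortedListCaveat
{
    static void Main()
    {
        var byModulus = new SortedList<int, string>();
        byModulus.Add(5, "3+4i");
        try
        {
            // 4+3i has the same modulus (5) as 3+4i: a duplicate key.
            byModulus.Add(5, "4+3i");
        }
        catch (ArgumentException)
        {
            Console.WriteLine("duplicate modulus rejected");
        }
    }
}
```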
public struct Complex: IComparable<Complex>
{
//complex rectangular number: a + bi
public decimal A;
public decimal B;
//synonymous with absolute value, or in geometric terms, distance
public decimal Modulus() { ... }
//CompareTo() is the default comparison used by most built-in sorts;
//all we have to do here is pass through to Decimal's IComparable implementation
//via the results of the Modulus() methods
public int CompareTo(Complex other){ return this.Modulus().CompareTo(other.Modulus()); }
}
You can now use any sorting method you choose on any collection of Complex instances: Array.Sort(), List.Sort(), Enumerable.OrderBy() (via a key selector; if Complex were a member of a containing class, you could sort the containing class by its Complex members without going the extra level down to comparing moduli), etc.
You stated you wanted to sort in descending order; you may consider multiplying the results of the Modulus() comparison by -1 before returning it. However, I would caution against this as it may be confusing; you would have to use a method that normally gives you descending order to get the list in ascending order. Instead, most sorting methods allow you to specify either a sorting direction, or a custom comparison which can still make use of the IComparable implementation:
//This will use your Comparison, but reverse the sort order based on its result
myEnumerableOfComplex.OrderByDescending(c=>c);
//This explicitly negates your comparison; you can also use b.CompareTo(a),
//which is equivalent
myListOfComplex.Sort((a, b) => a.CompareTo(b) * -1);
//DataGridView objects use a SortDirection enumeration to control and report
//sort order
myGridViewOfComplex.Sort(myGridViewOfComplex.Columns["ComplexColumn"], ListSortDirection.Descending);
The docs for both DynamicInvoke and DynamicInvokeImpl say:
Dynamically invokes (late-bound) the
method represented by the current
delegate.
I notice that DynamicInvoke and DynamicInvokeImpl take an array of objects instead of a specific list of arguments (which is the late-bound part, I'm guessing). But is that the only difference? And what is the difference between DynamicInvoke and DynamicInvokeImpl?
The main difference between calling it directly (which is shorthand for Invoke(...)) and using DynamicInvoke is performance; a factor of more than 700x by my measurement (below).
With the direct/Invoke approach, the arguments are already pre-validated via the method signature, and the code to pass them into the method already exists (I would say "as IL", but I seem to recall the runtime provides this directly, without any IL). With DynamicInvoke it needs to check them from the array via reflection (are they all appropriate for this call; do they need unboxing; etc.); this is slow if you are using it in a tight loop, and should be avoided where possible.
Example; results first (I increased the LOOP count from the previous edit, to give a sensible comparison):
Direct: 53ms
Invoke: 53ms
DynamicInvoke (re-use args): 37728ms
DynamicInvoke (per-call args): 39911ms
With code:
static void DoesNothing(int a, string b, float? c) { }
static void Main() {
Action<int, string, float?> method = DoesNothing;
int a = 23;
string b = "abc";
float? c = null;
const int LOOP = 5000000;
Stopwatch watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method(a, b, c);
}
watch.Stop();
Console.WriteLine("Direct: " + watch.ElapsedMilliseconds + "ms");
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.Invoke(a, b, c);
}
watch.Stop();
Console.WriteLine("Invoke: " + watch.ElapsedMilliseconds + "ms");
object[] args = new object[] { a, b, c };
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.DynamicInvoke(args);
}
watch.Stop();
Console.WriteLine("DynamicInvoke (re-use args): "
+ watch.ElapsedMilliseconds + "ms");
watch = Stopwatch.StartNew();
for (int i = 0; i < LOOP; i++) {
method.DynamicInvoke(a,b,c);
}
watch.Stop();
Console.WriteLine("DynamicInvoke (per-call args): "
+ watch.ElapsedMilliseconds + "ms");
}
Coincidentally, I have found another difference.
If Invoke throws an exception, it can be caught as the expected exception type.
However, DynamicInvoke throws a TargetInvocationException. Here is a small demo:
using System;
using System.Collections.Generic;
namespace DynamicInvokeVsInvoke
{
public class StrategiesProvider
{
private readonly Dictionary<StrategyTypes, Action> strategies;
public StrategiesProvider()
{
strategies = new Dictionary<StrategyTypes, Action>
{
{StrategyTypes.NoWay, () => { throw new NotSupportedException(); }}
// more strategies...
};
}
public void CallStrategyWithDynamicInvoke(StrategyTypes strategyType)
{
strategies[strategyType].DynamicInvoke();
}
public void CallStrategyWithInvoke(StrategyTypes strategyType)
{
strategies[strategyType].Invoke();
}
}
public enum StrategyTypes
{
NoWay = 0,
ThisWay,
ThatWay
}
}
While the second test goes green, the first one faces a TargetInvocationException.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SharpTestsEx;
namespace DynamicInvokeVsInvoke.Tests
{
[TestClass]
public class DynamicInvokeVsInvokeTests
{
[TestMethod]
public void Call_strategy_with_dynamic_invoke_can_be_caught()
{
bool caught = false;
try
{
new StrategiesProvider().CallStrategyWithDynamicInvoke(StrategyTypes.NoWay);
}
catch(NotSupportedException exc)
{
/* Fails because the NotSupportedException is wrapped
* inside a TargetInvocationException! */
caught = true;
}
caught.Should().Be(true);
}
[TestMethod]
public void Call_strategy_with_invoke_can_be_caught()
{
bool caught = false;
try
{
new StrategiesProvider().CallStrategyWithInvoke(StrategyTypes.NoWay);
}
catch(NotSupportedException exc)
{
caught = true;
}
caught.Should().Be(true);
}
}
}
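The original exception is not lost, though: DynamicInvoke wraps it, and it can be recovered from the InnerException property. A minimal sketch:

```csharp
using System;
using System.Reflection;

class UnwrapDemo
{
    static void Main()
    {
        Action thrower = () => { throw new NotSupportedException(); };
        try
        {
            thrower.DynamicInvoke();
        }
        catch (TargetInvocationException exc)
        {
            // The exception thrown inside the delegate body is preserved here.
            Console.WriteLine(exc.InnerException is NotSupportedException); // True
        }
    }
}
```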
Really there is no functional difference between the two. If you pull up the implementation in Reflector, you'll notice that DynamicInvoke just calls DynamicInvokeImpl with the same set of arguments. No extra validation is done, and it's a non-virtual method, so there is no chance for its behavior to be changed by a derived class. DynamicInvokeImpl is a virtual method where all of the actual work is done.