Is it in any way possible (preferably without using any third-party libraries) to create a function whose parameter type is determined at runtime in C#?
For example:
public static void myfunc(var x)
{
System.Windows.Forms.MessageBox.Show(x); //just an example
}
NOTE: I want the runtime to determine the type of the parameter, and I do not want to cast the parameter to another type later, as would be necessary with generics. For example, I don't want:
myfunc<T>(T x)
// and then :
MessageBox.Show((string)m);
UPDATE:
I am actually making a function parser for my programming language, which translates to C# code. In my language, I wanted the parameter types to be determined at runtime always. I was looking for some good C# feature for easy translation.
For example, in my language's syntax:
function msg << x
MessageBox.Show x
end
needed to be translated to something that didn't ask for a type at compile time, but would need one at runtime.
For example:
public static void msg(var x)
{
System.Windows.Forms.MessageBox.Show(x);
}
The keyword introduced for runtime binding in C# 4 is dynamic.
public static void myfunc(dynamic x)
This allows you to make assumptions about x that are unchecked at compile time but will fail at runtime if those assumptions prove invalid.
public static void MakeTheDuckQuack(dynamic duck)
{
Console.WriteLine(duck.Quack());
}
The assumption made here is that the parameter will have a method named Quack that accepts no arguments and returns a value that can then be used as the argument to Console.WriteLine. If any of those assumptions are invalid, you will get a runtime failure.
Given classes defined as
class Duck
{
public string Quack()
{
return "Quack!";
}
}
class FakeDuck
{
public string Quack()
{
return "Moo!";
}
}
And method calls
MakeTheDuckQuack(new Duck());
MakeTheDuckQuack(new FakeDuck());
MakeTheDuckQuack(42);
The first two succeed, as runtime binding succeeds, and the third results in an exception, as System.Int32 does not have a method named Quack.
Generally speaking, you would want to avoid this if possible, as you're essentially stipulating that an argument fulfill an interface of some sort without strictly defining it. If you are working in an interop scenario, then perhaps this is what you have to do. If you are working with types that you control, then you would be better served trying to achieve compile time safety via interfaces and/or base classes. You can even use different strategies (such as the Adapter Pattern) to make types you do not control (or cannot change) conform to a given interface.
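For illustration, here is a rough sketch of that compile-time-safe alternative; the IQuacker interface and the adapter class are hypothetical names, not anything from the framework:
using System;

public interface IQuacker
{
    string Quack();
}

// The Duck from above can simply implement the interface.
public class Duck : IQuacker
{
    public string Quack()
    {
        return "Quack!";
    }
}

// Adapter Pattern: wrap a type you cannot change so it conforms to the interface.
public class Cow
{
    public string Moo() { return "Moo!"; }
}

public class QuackingCowAdapter : IQuacker
{
    private readonly Cow cow;
    public QuackingCowAdapter(Cow cow) { this.cow = cow; }
    public string Quack() { return cow.Moo(); }
}

public static void MakeTheDuckQuack(IQuacker duck)
{
    // Any bad assumption about duck is now a compile-time error, not a runtime failure.
    Console.WriteLine(duck.Quack());
}
With this version, MakeTheDuckQuack(42) simply refuses to compile instead of failing at runtime.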
If you need to know the type... then you need to know the type. You can't have your cake and eat it too.
First off, the cast in your example is unnecessary as all objects implement ToString(). Instead of telling us what you think you need, tell us what problem you are trying to solve. There is almost certainly a solution either via generics or the use of the dynamic keyword (though dynamic is rarely needed), but we need more info. If you add more I'll update this answer.
You could use a parameter of type object or, if you don't know how many items there will be, a params object array, i.e. params object[] cParams.
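A rough sketch of the params idea (the method name here is made up):
public static void ShowAll(params object[] cParams)
{
    foreach (object item in cParams)
    {
        // Every object has ToString(), so no cast is needed.
        System.Windows.Forms.MessageBox.Show(item.ToString());
    }
}

// Usage: ShowAll("hello", 42, DateTime.Now);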
Related
I'm trying to make a user-friendly debug framework where users can create more debug variables as easily as possible.
I need to cast an object to the return type of my property/method (bool, int, whatever), without knowing what that return type is.
tldr: How can I return a non-generic type (in this example bool) from
public bool MyGetSetProperty {
get {
object obj = new object();
return (bool)obj;
}
}
WITHOUT specifying "return (bool)"? So something like
return (GenericThingHereThatPassesAsBool)obj;
or
return obj as MyGetSetPropertyReturnType;
----------
Detail:
I want users to be able to create new properties in this class as easily as possible - basically copying+pasting the whole code block below, and only replacing "SerializeAll" with their variable name, and the type declaration "bool" with the type they want on the field/property declarations.
In my getter, I have a couple separate checks to see if the entire debug system is enabled. If not, it returns a default value for the given variable.
[Tooltip ("Serialize ALL XML output fields?"), SerializeField]
private bool debugSerializeAll = false;
/// <summary>
/// Serialize ALL XML output fields?
/// </summary>
[DebugValue, DebugDefault (true)]
public bool SerializeAll {
get {
if (!classEnabled || !debug.debugEnabled)
return (bool)GetDefaultValue (MethodBase.GetCurrentMethod ());
return debugSerializeAll;
}
set { debugSerializeAll = value; }
}
The thing is, I can't just return "default" because the default value can be overridden - see the "DebugDefault" attribute, where the "default" value for this bool is actually "true", at least as far as my debug system is concerned. The method "GetDefaultValue" accounts for that, and it returns an object that could be a string, int, bool, anything.
I'm already doing funky reflection stuff to access the MethodInfo, PropertyInfo, etc of the getter and property SerializeAll. I just can't figure out how to not have to also specify the (bool) cast on the return. Again, the goal is as little human editing as possible.
Thank you!
You should be able to do this with a cast to dynamic.
return (dynamic)GetDefaultValue (MethodBase.GetCurrentMethod ());
Bear in mind that the compiler isn't actually making this into a cast to bool. Rather, this makes the compiler ignore compile-time type-safety, and instead the program will use reflection at runtime to figure out the best way to take the value returned from GetDefaultValue and turn it into what it needs to be.
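Applied to the property from the question, the getter would end up roughly like this (same fields and GetDefaultValue helper as in the question; only the cast changes):
[DebugValue, DebugDefault (true)]
public bool SerializeAll {
    get {
        if (!classEnabled || !debug.debugEnabled)
            // No (bool) here: the dynamic result is converted to the
            // property's return type at runtime, and throws if it can't be.
            return (dynamic)GetDefaultValue (MethodBase.GetCurrentMethod ());
        return debugSerializeAll;
    }
    set { debugSerializeAll = value; }
}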
I want users to be able to create new properties in this class as easily as possible...
This is a good principle.
... basically copying+pasting the whole code block below, and only replacing "SerializeAll" with their variable name, and the type declaration "bool" with the type they want on the field/property declarations.
That totally breaks the principle you just mentioned, and results in a bunch of boilerplate code and other code smells.
In theory, you could probably create a Fody Weaver or something to add this boilerplate code upon compilation. But that's probably more work than it's worth.
I would hazard a guess that this is an "XY Problem", where you're asking how to achieve the solution that you've imagined, rather than asking how to solve the problem you're actually facing.
Why should every property in your class return a completely different value if certain private fields are set a certain way? This sounds like a big Separation of Concerns problem, where you're tasking your class with doing two completely different things. I strongly suggest you find another way to solve the problem you're trying to solve. For example, when code tries to get an instance of your class, it could go through a method that checks the classEnabled and debug.debugEnabled concepts (which probably belong in a different class), and returns an instance with the properties all set to their defaults.
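As a very rough sketch of that idea (every name below is hypothetical):
public class DebugSettings
{
    public bool SerializeAll { get; set; }
    public int MaxLogEntries { get; set; }

    // The "is debugging enabled at all?" decision lives in one place.
    // Callers ask for settings here instead of checking flags in every getter.
    public static DebugSettings GetEffectiveSettings(bool debugEnabled, DebugSettings live)
    {
        if (!debugEnabled)
            return new DebugSettings { SerializeAll = true, MaxLogEntries = 100 }; // defaults
        return live;
    }
}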
Please see this link: How to cast Object to boolean?
or
I think you need to study generic classes:
Check if a class is derived from a generic class
Quick question.
In the second example on this documentation page (the second code block, featuring a method called CompareDinosByLength), the Sort method is called as such:
dinosaurs.Sort(CompareDinosByLength);
Why is it that the Sort method didn't need an explicitly declared delegate, as I would have thought by reading the Delegate documentation? Before I found that example, I was attempting to do it like so:
delegate int CompareDinosDel(string first, string second);
CompareDinosDel newDel = CompareDinosByLength;
dinosaurs.Sort(newDel);
But I kept getting errors related to the delegate / delegate method not being proper Comparers.
Shouldn't both work?
Why is it that the Sort method didn't need an explicitly declared delegate?
C# permits a method group -- that is, a method which is named without having the (...) argument list to invoke it -- to be used in a context where a delegate is expected. The compiler performs overload resolution on the method group as though the method group had been invoked with arguments of the types of the delegate's formal parameters. This determines which method of the method group should be used to create the delegate.
This overload resolution process can sometimes lead to unusual situations involving method type inference when the method group is undergoing overload resolution to a delegate type which is a formal parameter type of a generic method; Sort, fortunately is not a generic method, so these oddities do not come into play.
This feature was added to C# 2.0; before that a method group had to be converted to a delegate via
new MyDelegate(MyMethod)
I keep getting errors related to the delegate / delegate method not being proper Comparers. Shouldn't both work?
Unfortunately, no. C# does not have structural identity on delegate types. That is:
delegate void Foo();
delegate void Bar();
...
Foo foo = ()=>{};
Bar bar = foo; // ERROR!
Even though Foo and Bar are structurally identical, the compiler disallows the conversion. You can however use the previous trick:
Bar bar = foo.Invoke;
This is equivalent to
Bar bar = new Bar(foo.Invoke);
However the new bar has as its action to invoke foo; it goes through a level of indirection.
This restriction does make some sense.
Reason one:
You don't expect structural identity to work in other places:
struct Point { int x; int y; ... }
struct Pair { int key; int value; ... }
....
Point point = whatever;
Pair pair = point; // ERROR
Reason two:
You might want to say:
delegate int PureMethod(int x);
And have a convention that PureMethod delegates are "pure" -- that is, the methods they represent do not throw, always return, return a value computed only from their argument, and produce no side effects. It should be an error to say
Func<int, int> f = x => { Console.WriteLine(x); return x+1; };
PureMethod p = f;
Because f is not pure.
However in hindsight people do not actually make semantics-laden delegates. It is a pain point that a value of type Predicate<int> cannot be assigned to a variable of type Func<int, bool> and vice versa.
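To make that pain point concrete, a small sketch (statements inside some method, assuming using System is in scope):
Predicate<int> isEven = x => x % 2 == 0;

// Func<int, bool> f = isEven;     // ERROR: no structural identity between the two delegate types
Func<int, bool> f = isEven.Invoke; // OK: method group conversion, at the cost of one extra indirection

Console.WriteLine(f(4)); // True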
If we had to do it all over again, I suspect that delegates would have structural identity in the CLR.
Finally, I note that VB is much more forgiving about inter-assigning mixed delegate types; it automatically builds an adapter delegate if it needs to. This can be confusing because sometimes it looks like referential identity is maintained when in fact it is not, but this is in keeping with the VB philosophy of "just make my code work".
dinosaurs.Sort(CompareDinosByLength);
CompareDinosDel newDel = CompareDinosByLength;
dinosaurs.Sort(newDel);
Shouldn't both work?
No, because you are passing two very different things into those two function calls.
The key here is to recognize that, in both cases, what you actually pass into the method is a delegate. In the first case, the compiler is implicitly creating a delegate of the correct type for you, even though you didn't explicitly ask it to. In the second case, you're making your own delegate, but it's the wrong type, so that attempt will fail.
Starting with .NET 2.0, the C# compiler allows you to skip explicitly creating delegates in many situations. If you use a method name in a context where a delegate is expected, and the compiler can verify that the method signature and the delegate signature match, it will implicitly construct a delegate instance from the method. That is, instead of doing this (the "old" way):
this.SubmitButton.Click += new System.EventHandler(this.SubmitButton_Click);
You can now do this:
this.SubmitButton.Click += this.SubmitButton_Click;
Visual Studio itself will still generate the older syntax, I assume because it still works and because it's not worth the developer's time to go messing around with it for very little benefit. However, most popular code analysis tools will flag the redundant delegate creation if you use it in your own code.
This same technique works anywhere you have a method (technically a "method group", since one method name can refer to more than one overload), and you assign it to a variable of a delegate type. Passing a method as a parameter into another method is the same type of assignment operation: you are "assigning" the actual parameter at the call site to the formal parameter in the method body, so the compiler does the same thing. In other words, the following two method calls do exactly the same thing:
dinosaurs.Sort(CompareDinosByLength);
dinosaurs.Sort(new Comparison<string>(CompareDinosByLength));
Your unsuccessful attempt to make a delegate, on the other hand, did something slightly different:
dinosaurs.Sort(new CompareDinosDel(CompareDinosByLength));
This time, you told the compiler exactly what kind of delegate you wanted, but that's not the kind of delegate that the method expected. In general, the compiler isn't going to try to second-guess what you told it to do; if you ask it to do something that looks "fishy", it will produce an error (in this case, a type mismatch error).
This behavior is similar to what would happen if you tried to do this:
public class A
{
public int x;
}
public class B
{
public int x;
}
public void Foo(A a) { }
public void Bar()
{
B b = new B();
this.Foo(b);
}
In this case, A and B are two distinct types, even though their "type signature" is exactly the same. Any line of code that works on an A will also work equally well on a B, and yet we cannot use them interchangeably. Delegates are types like any other types, and C#'s type safety rules require that we use the correct delegate types where we need them; we can't get away with just using a close-enough type.
The reason this is a good thing is that a delegate type may have a lot more meaning than just its technical components would imply. Like any other data type, when we create delegates for our applications, we usually apply some kind of semantic meaning to those types. We expect, for example, that if we have a ThreadStart delegate, it's going to be associated with a method that runs when a new thread starts. The delegate's signature is about as simple as you get (no parameters, no return value), but the implication behind the delegate is very important.
Because of that, we generally want the compiler to tell us if we try to use the wrong delegate type in the wrong place. More often than not, that's probably a sign that we are about to do something that may compile, and even run, but is likely to do the wrong thing. That's never something you want from your program.
While all that is true, it's also true that oftentimes you really don't want to assign any semantic meaning to your delegates, or else the meaning is assigned by some other part of your application. Sometimes you really do just want to pass around an arbitrary piece of code that has to run later. This is very common with functional-style programs or asynchronous programs, where you get things like continuations, callbacks, or user-supplied predicates (look at the various LINQ methods, for example). .NET 3.5 and onward supply a very useful set of completely generic delegates, in the Action and Func family, for this purpose.
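For example, a lambda passed straight to Sort is converted to the Comparison&lt;string&gt; delegate that Sort expects, and the Func family plays the same role in LINQ (a small self-contained sketch):
using System;
using System.Collections.Generic;
using System.Linq;

class Example
{
    static void Main()
    {
        var dinosaurs = new List<string> { "Compsognathus", "Amargasaurus", "Oviraptor" };

        // The compiler converts the lambda to Comparison<string> because
        // that is the delegate type this Sort overload expects.
        dinosaurs.Sort((first, second) => first.Length.CompareTo(second.Length));

        // Where expects a Func<string, bool>; again no named delegate type is needed.
        IEnumerable<string> longNames = dinosaurs.Where(d => d.Length > 10);

        Console.WriteLine(string.Join(", ", longNames));
    }
}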
Consider the following code:
public class Foo
{
public int Bar { get; set; }
}
public class SomeOtherFoo
{
public int Bar { get; set; }
}
Should I be able to say:
Foo foo = new SomeOtherFoo();
That won't work in C# either. When you have two different types that have the same body/implementation, they are still different types. Two classes with the same properties are still different classes. Two different delegates with the same signature are still different delegates.
The Sort method has already defined the delegate type, and you need to match it. This is very much like it defining a class that it needs to accept as a parameter; you can't just pass in another type with the same properties and methods.
This is what it means for a language to be statically typed. An alternate type system would be to use "Duck Typing" in which the language doesn't apply the constraint that a variable be of a specific type, but rather that it has a specific set of members. In other words, "If it walks like a duck, and quacks like a duck, pretend it's a duck." That is opposed to the style of typing that says, "It must be a duck, period, even if it knows how to walk and quack."
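For contrast, C# does offer opt-in duck typing through dynamic; a small sketch (unrelated to the Sort example):
using System;

class Robot
{
    public string Quack() { return "Beep. I mean, quack."; }
}

class Program
{
    static void MakeItQuack(dynamic thing)
    {
        // No compile-time check that 'thing' has a Quack method;
        // the binder looks it up at runtime and throws if it is missing.
        Console.WriteLine(thing.Quack());
    }

    static void Main()
    {
        MakeItQuack(new Robot()); // works: it quacks like a duck
        MakeItQuack(42);          // compiles, but throws at runtime
    }
}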
Say I have a method that is overloaded such as void PrintInfo(Person) and void PrintInfo(Item), and so on. I try to invoke these methods by passing in an Object.
I'm wondering why it is giving me an error when I do this; aren't all classes inherited from Object? I want to avoid doing an if/switch statement where I check which type the Object is before calling the appropriate method.
What do you guys think is the best approach in this case?
All Persons are objects, but not all objects are Persons. Because of this you can pass a Person to a method that accepts an object but you can't pass an object to a method that requires a Person.
It sounds like you have some common bit of functionality between various objects that you want to use. Given this, it would be best to find either a common ancestor that has all of the functionality that you need, or an interface that they all implement (that again provides everything that you need).
In the case of printing, you may just need the ToString method. In that case, you can just have the method accept an object and call ToString on it. (That's what many print methods do, such as Console.WriteLine.)
You need to understand that because C# is a statically typed language (barring dynamic) the particular overload that is chosen (called overload resolution) is determined at compile time, not run time. That means that the compiler needs to be able to unequivocally determine what type your argument is. Consider:
Object foo;
foo = "String";
foo = 5;
PrintInfo(foo); // Which overload of PrintInfo should be called? The compiler doesn't know!
There are a few ways to solve this - making foo of type dynamic is one; that will cause the correct overload to be chosen at run time. The problem with that is that you lose type safety - if you don't have an appropriate overload for that type, your application will still compile but will crash when you try to print the unsupported type's info.
An arguably better approach is to ensure that foo is always of the correct type, rather than just Object.
As #Servy suggests, another approach is to attach the behavior to the type itself. You could, for instance, make an interface IHasPrintInfo:
public interface IHasPrintInfo { String PrintInfo { get; } }
and implement that interface on all items whose info you might print. Then your PrintInfo function can just take an IHasPrintInfo:
public void PrintInfo(IHasPrintInfo info) {
Console.WriteLine(info.PrintInfo);
}
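A hypothetical Person would then opt in like this:
public class Person : IHasPrintInfo
{
    public string Name { get; set; }

    // This is the member PrintInfo(IHasPrintInfo) relies on,
    // so no per-type overload is needed any more.
    public string PrintInfo
    {
        get { return "Person: " + Name; }
    }
}

// Usage: PrintInfo(new Person { Name = "Ada" });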
Here it is ambiguous for the compiler; the compiler can't figure out which version of the method (Person/Item) you intend to call.
I wonder why it is not possible to declare a method parameter as var, like:
private void myMethod(var myValue) {
// do something
}
You can only use var for variables inside the method body. Also the variable must be assigned at declaration and it must be possible to deduce the type unambiguously from the expression on the right-hand side.
In all other places you must specify a type, even if a type could in theory be deduced.
The reason is due to the way that the compiler is designed. A simplified description is that it first parses everything except method bodies and then makes a full analysis of the static types of every class, member, etc. It then uses this information when parsing the method bodies, and in particular for deducing the type of local variables declared as var. If var were allowed anywhere then it would require a large change to the way the compiler works.
You can read Eric Lippert's article on this subject for more details:
Why no var on fields?
Because the compiler determines the actual type by looking at the right hand side of the assignment. For example, here it is determined to be a string:
var s = "hello";
Here it is determined to be Foo:
var foo = new Foo();
In method arguments, there is no "right hand side of the assignment", so you can't use var.
See the posting by Eric Lippert about why var is not allowed on fields, which also contains the explanation why it doesn't work in method signatures:
Let me give you a quick oversimplification of how the C# compiler works. First we run through every source file and do a "top level only" parse. That is, we identify every namespace, class, struct, enum, interface, and delegate type declaration at all levels of nesting. We parse all field declarations, method declarations, and so on. In fact, we parse everything except method bodies; those, we skip and come back to them later.
[...]
if we have "var" fields then the type of the field cannot be determined until the expression is analyzed, and that happens after we already need to know the type of the field.
Please see Juliet's answer for a better answer to this question.
Because it was too hard to add full type inference to C#.
Other languages such as Haskell and ML can automatically infer the most general type without you having to declare it.
The other answers state that it's "impossible" for the compiler to infer the type of var but actually it is possible in principle. For example:
abstract void anotherMethod(double z, double w);
void myMethod<T>(T arg)
{
anotherMethod(arg, 2.0); // Now a compiler could in principle infer that arg must be of type double (but the actual C# compiler can't)
}
Have "var" method parameters is in principle the same thing as generic methods:
void myMethod<T>(T arg)
{
....
}
It is unfortunate that you can't just use the same syntax for both, but this is probably due to the fact that C#'s type inference was added only later.
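For example, the compiler already infers T for you at each call site (a small sketch):
using System;

class InferenceDemo
{
    static void MyMethod<T>(T arg)
    {
        // T is fixed per call site, at compile time, and inferred from the argument.
        Console.WriteLine(typeof(T));
    }

    static void Main()
    {
        MyMethod(5);        // prints System.Int32
        MyMethod("hello");  // prints System.String
    }
}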
In general, subtle changes in the language syntax and semantics can turn a "deterministic" type inference algorithm into an undecidable one.
ML, Haskell, Scala, F#, SML, and other languages can easily figure out the type from equivalent expressions in their own language, mainly because they were designed with type-inference in mind from the very start. C# wasn't, its type-inference was tacked on as a post-hoc solution to the problem of accessing anonymous types.
I speculate that true Hindley-Milner type inference was never implemented for C# because it's complicated to deduce types in a language so dependent on classes and inheritance. Let's say I have the following classes:
class Base { public void Print() { ... } }
class Derived1 : Base { }
class Derived2 : Base { }
And now I have this method:
var create() { return new Derived1(); }
What's the return type here? Is it Derived1, or should it be Base? For that matter, should it be object?
OK, now let's say I have this method:
void doStuff(var someBase) { someBase.Print(); }
void Main()
{
doStuff(new Derived1());
doStuff(new Derived2()); // <-- type error or not?
}
The first call, doStuff(new Derived1()), presumably forces doStuff to the type doStuff(Derived1 someBase). Let's assume for now that we infer a concrete type instead of a generic type T.
What about the second call, doStuff(new Derived2())? Is it a type error, or do we generalize to doStuff<T>(T somebase) where T : Base instead? What if we made the same call in a separate, unreferenced assembly -- the type inference algorithm would have no idea whether to use the narrow type or the more generalized type. So we'd end up with two different type signatures based on whether method calls originate from inside the assembly or a foreign assembly.
You can't generalize to wider types based on usage of the function. You basically need to settle on a single concrete type as soon as you know which concrete type is being passed in. So in the example code above, unless you explicitly cast up to the Base type, doStuff is constrained to accept types of Derived1 and the second call is a type error.
Now the trick here is settling on a type. What happens here:
class Whatever
{
void Foo() { DoStuff(new Derived1()); }
void Bar() { DoStuff(new Derived2()); }
void DoStuff(var x) { ... }
}
What's the type of DoStuff? For that matter, we know based on the above that one of the Foo or Bar methods contain a type error, but can you tell from looking which has the error?
It's not possible to resolve the type without changing the semantics of C#. In C#, the order of method declarations has no impact on compilation (or at least it shouldn't ;) ). You might say instead that the method declared first (in this case, the Foo method) determines the type, so Bar has an error.
This works, but it also changes the semantics of C#: changes in method order will change the compiled type of the method.
But let's say we went further:
// Whatever.cs
class Whatever
{
public void DoStuff(var x);
}
// Foo.cs
class Foo
{
public Foo() { new Whatever().DoStuff(new Derived1()); }
}
// Bar.cs
class Bar
{
public Bar() { new Whatever().DoStuff(new Derived2()); }
}
Now the method is being invoked from different files. What's the type? It's not possible to decide without imposing some rules on compilation order: if Foo.cs gets compiled before Bar.cs, the type is determined by Foo.cs.
While we can impose those sorts of rules on C# to make type inference work, it would drastically change the semantics of the language.
By contrast, ML, Haskell, F#, and SML support type inference so well because they have these sorts of restrictions: you can't call methods before they're declared, the first method call to inferred functions determines the type, compilation order has an impact on type inference, etc.
The "var" keyword is used in C# and VB.NET for type inference - you basically tell the C# compiler: "you figure out what the type is".
"var" is still strongly typed - you're just too lazy yourself to write out the type and let the compiler figure it out - based on the data type of the right-hand side of the assignment.
Here, in a method parameter, the compiler has no way of figuring out what you really meant. How? What type did you really mean? There's no way for the compiler to infer the type from the method definition - therefore it's not a valid statement.
Because C# is a type-safe, strongly typed language. At any place in your program the compiler always knows the type of the argument you are using. The var keyword was just introduced to allow variables of anonymous types.
Check out dynamic in C# 4.
Type inference is type inference, either in local expressions or global / interprocedural. So it isn't about "not having a right hand side", because in compiler theory, a procedure call is a form of "right hand side".
C# could do this if the compiler did global type inference, but it does not.
You can use "object" if you want a parameter that accepts anything, but then you need to deal with the runtime conversion and potential exceptions yourself.
"var" in C# isn't a runtime type binding, it is a compile time feature that ends up with a very specific type, but C# type inference is limited in scope.
Hey guys, I've stripped away some of the complexities of my situation to get to the core of what I need to know.
I want to send a collection of Values to a method, and inside that method I want to test the Value against, say, a property of an Entity. The property will always be of the same Type as the Value.
I also want to test if the value is null, or the default value, obviously depending on whether the value type is a reference type, or a value type.
Now, if all the values sent to the method are of the same type, then I could do this using generics, quite easily, like this:
public static void testGenerics<TValueType>(List<TValueType> Values) {
//test null/default
foreach (TValueType v in Values) {
if (EqualityComparer<TValueType>.Default.Equals(v, default(TValueType))) {
//value is null or default for its type
} else {
//compare against another value of the same type
if (EqualityComparer<TValueType>.Default.Equals(v, SomeOtherValueOfTValueType)) {
//value equals
} else {
//value doesn't equal
}
}
}
}
My question is, how would I carry out the same function if my collection contained values of different types?
My main concerns are successfully identifying null or default values, and successfully identifying if each value passed in, equals some other value of the same type.
Can I achieve this by simply passing the type object? I also can't really use the EqualityComparers as I can't use generics, because I'm passing in an unknown number of different Types.
Is there a solution?
Thanks
UPDATE
OK, searching around, could I use the following code to test for null/default successfully in my scenario (taken from this SO answer):
object defaultValue = type.IsValueType ? Activator.CreateInstance(type) : null;
I reckon this might work.
Now, how can I successfully and reliably compare two values of the same type, without knowing their types at compile time?
There is a static Object.Equals(object left, object right) method; internally it relies on the Equals(object) implementation of one of the provided arguments. Why are you avoiding it?
The rules of implementing equality members are roughly the following:
Required: Override Equals(object) and GetHashCode() methods
Optional: Implement IEquatable<T> for your type (this is what EqualityComparer.Default relies on)
Optional: Implement == and != operators
So as you can see, relying on object.Equals(object left, object right) is the best option, since it builds on the strictly required part of the equality implementation pattern.
Moreover, it will be the fastest option, since it relies only on virtual method calls. Anything else will inevitably involve some reflection.
public static void TestGenerics(IList values) {
foreach (object v in values) {
if (ReferenceEquals(null,v)) {
// v is null reference
}
else {
var type = v.GetType();
if (type.IsValueType && Equals(v, Activator.CreateInstance(type))) {
// v is default value of its value type
}
else {
// v is a non-default value of some value type, or a non-null value of some reference type
}
}
}
}
The short answer is "yes", but the longer answer is that it's possible but will take a non-trivial amount of effort on your part and some assumptions in order to make it work. Your issue really comes when you have values that would be considered "equal" when compared in strongly-typed code, but do not have reference equality. Your biggest offenders will be value types, as a boxed int with a value of 1 won't have referential equality to another boxed int of the same value.
Given that, you have to go down the road of using things like the IComparable interface. If your types will always specifically match, then this is likely sufficient. If either of your values implements IComparable then you can cast to that interface and compare to the other instance to determine equality (==0). If neither implements it then you'll likely have to rely on referential equality. For reference types this will work unless there is custom comparison logic (an overloaded == operator on the type, for example).
Just bear in mind that the types would have to match EXACTLY. In other words, an int and a short won't necessarily compare like this, nor would an int and a double.
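A rough sketch of that IComparable route (the helper name is made up, and it assumes the two runtime types match exactly):
using System;

static class LooseEquality
{
    public static bool AreEqual(object left, object right)
    {
        if (ReferenceEquals(left, right))
            return true;
        if (left == null || right == null)
            return false;

        var comparable = left as IComparable;
        if (comparable != null)
            return comparable.CompareTo(right) == 0; // boxed 1 equals boxed 1 here

        // No IComparable: fall back to the type's own Equals (often reference equality).
        return left.Equals(right);
    }
}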
You could also go down the path of using reflection to dynamically invoke the Default property on the generic type determined at runtime by the supplied Type variable, but I wouldn't want to do that if I didn't have to for performance and compile-time safety (or lack thereof) reasons.
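If you did go down that reflection road, it would look roughly like this (with the performance caveat noted above):
using System;
using System.Collections;
using System.Collections.Generic;

static class RuntimeComparer
{
    // Builds EqualityComparer<T>.Default for a Type only known at runtime,
    // then compares through its non-generic IEqualityComparer view.
    public static bool AreEqual(object left, object right, Type type)
    {
        Type comparerType = typeof(EqualityComparer<>).MakeGenericType(type);
        var comparer = (IEqualityComparer)comparerType
            .GetProperty("Default")
            .GetValue(null, null);
        return comparer.Equals(left, right);
    }
}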
Is the list of types you need to test a pre-determined list? If so, you can use the Visitor Pattern (and maybe even if not since we have Generics). Create a method on your Entities (can be done using partial classes) that takes in an interface. Your class then calls a method on that interface passing itself. The interface method can be generic, or you can create an overload for each type you want to test.
Battery about to die, otherwise I would give an example.
Fifteen seconds after hitting "Save", the machine went into hibernation.
After thinking about it, the Visitor pattern might not solve your specific problem. I thought you were trying to compare entities, but it appears you are testing values (so potentially ints and strings).
But for the sake of completion, and because the visitor pattern is kind of cool once you realize what it does, here's an explanation.
The Visitor pattern allows you to handle multiple types without needing to figure out how to cast to the specific type (you decouple the type from the item using that type). It works by having two interfaces - the visitor and the acceptor:
interface IAcceptor
{
void Accept(IVisitor visitor);
}
interface IVisitor
{
void Visit(Type1 type1);
void Visit(Type2 type2);
// ... etc ...
}
You can optionally use a generic method there:
interface IVisitor
{
void Visit<T>(T instance);
}
The basic implementation of the accept method is:
void Accept(IVisitor visitor)
{
visitor.Visit(this);
}
Because the type implementing Accept() knows what type it is, the correct overload (or generic type) is used. You could achieve the same thing with reflection and a lookup table (or select statement), but this is much cleaner. Also, you don't have to duplicate the lookup among different implementations -- various classes can implement IVisitor to create type-specific functionality.
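Pulled together, a minimal sketch (the concrete entity types here are made up):
using System;

interface IVisitor
{
    void Visit(Customer customer);
    void Visit(Order order);
}

interface IAcceptor
{
    void Accept(IVisitor visitor);
}

class Customer : IAcceptor
{
    // 'this' is statically a Customer here, so the Visit(Customer) overload is chosen.
    public void Accept(IVisitor visitor) { visitor.Visit(this); }
}

class Order : IAcceptor
{
    public void Accept(IVisitor visitor) { visitor.Visit(this); }
}

class PrintingVisitor : IVisitor
{
    public void Visit(Customer customer) { Console.WriteLine("Got a customer"); }
    public void Visit(Order order) { Console.WriteLine("Got an order"); }
}

class Program
{
    static void Main()
    {
        IAcceptor[] items = { new Customer(), new Order() };
        IVisitor visitor = new PrintingVisitor();

        foreach (IAcceptor item in items)
            item.Accept(visitor); // type-specific handling, no casts or switch statements
    }
}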
The Visitor pattern is one way of doing "Double Dispatch". The answer to this question is another way and you might be able to morph it into something that works for your specific case.
Basically, a long-winded non-answer to your problem, sorry. :) The problem intrigues me, though -- like how do you know what property on the entity you should test against?