I am trying to figure out how to do an Equals on the following code:
Int32? testValue = 264;
MyTable.Where(a=>a.MyNullableNumberField.Equals(testValue));
I am stuck on the nullable field. I know that I could use == and it would work, but in this case I have to make the Equals method work.
The error that I get returned is:
Unable to create a constant value of type 'System.Object'. Only primitive types or enumeration types are supported in this context.
You have to be aware of the difference between the == operator and .Equals(). For reference types, the == operator compares references by default, while .Equals() is meant to compare values. Out of the box, a value comparison will only work with primitive types; any object more complex than that (e.g. a user-defined class) needs a custom .Equals() defined for it, to specify what exactly makes two instances of that object equal.
Normally, C# is smart enough to take care of things on its own when it comes to nullable numbers. However, in certain cases (such as LINQ to Entities) you have to push it in the right direction. This should work:
a.MyNullableNumberField.Value.Equals(testValue)
And to be totally safe, add a check right before to make sure the value exists:
a.MyNullableNumberField.HasValue && a.MyNullableNumberField.Value.Equals(testValue)
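Putting it together, the whole query might look like this (a sketch using the names from the question; if testValue itself can be null, check that outside the query as well):

Int32? testValue = 264;
// Guard with HasValue so rows where the field is NULL are filtered out safely.
var results = MyTable.Where(a =>
    a.MyNullableNumberField.HasValue &&
    a.MyNullableNumberField.Value.Equals(testValue.Value));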
Related
Today I stumbled upon an interesting bug I wrote. I have a set of properties which can be set through a general setter. These properties can be value types or reference types.
public void SetValue( TEnum property, object value )
{
if ( _properties[ property ] != value )
{
// Only come here when the new value is different.
}
}
When writing a unit test for this method I found out the condition is always true for value types. It didn't take me long to figure out this is due to boxing/unboxing. It didn't take me long either to adjust the code to the following:
public void SetValue( TEnum property, object value )
{
if ( !_properties[ property ].Equals( value ) )
{
// Only come here when the new value is different.
}
}
The thing is I'm not entirely satisfied with this solution. I'd like to keep a simple reference comparison, unless the value is boxed.
The current solution I am thinking of is only calling Equals() for boxed values. Doing a check for boxed values seems a bit overkill. Isn't there an easier way?
If you need different behaviour when you're dealing with a value-type then you're obviously going to need to perform some kind of test. You don't need an explicit check for boxed value-types, since all value-types will be boxed** due to the parameter being typed as object.
This code should meet your stated criteria: If value is a (boxed) value-type then call the polymorphic Equals method, otherwise use == to test for reference equality.
public void SetValue(TEnum property, object value)
{
bool equal = ((value != null) && value.GetType().IsValueType)
? value.Equals(_properties[property])
: (value == _properties[property]);
if (!equal)
{
// Only come here when the new value is different.
}
}
( ** And, yes, I know that Nullable<T> is a value-type with its own special rules relating to boxing and unboxing, but that's pretty much irrelevant here.)
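To see why the value-type branch matters, here is a minimal illustration of boxing breaking ==:

object a = 42;
object b = 42;                    // a second, separate box for the same value
Console.WriteLine(a == b);        // False: reference comparison of two distinct boxes
Console.WriteLine(a.Equals(b));   // True: Int32.Equals compares the underlying values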
Equals() is generally the preferred approach.
The default implementation of .Equals() does a simple reference comparison for reference types, so in most cases that's what you'll be getting. Equals() might have been overridden to provide some other behavior, but if someone has overridden .Equals() in a class it's because they want to change the equality semantics for that type, and it's better to let that happen if you don't have a compelling reason not to. Bypassing it by using == can lead to confusion when your class sees two things as different when every other class agrees that they're the same.
Since the input parameter's type is object, you will always get a boxed value inside the method's context.
I think your only chance is to change the method's signature and to write different overloads.
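For example, something along these lines (a sketch; note that falling back from the generic overload to the object overload only works as intended on newer compilers, C# 7.3 and later, where constraints take part in overload resolution):

// Value types: compare by value via the virtual Equals.
public void SetValue<T>( TEnum property, T value ) where T : struct
{
    if ( !_properties[ property ].Equals( value ) )
    {
        // Only come here when the new value is different.
    }
}

// Reference types: a plain reference comparison is enough.
public void SetValue( TEnum property, object value )
{
    if ( !ReferenceEquals( _properties[ property ], value ) )
    {
        // Only come here when the new value is different.
    }
}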
How about this:
if(object.ReferenceEquals(first, second)) { return; }
if(first.Equals(second)) { return; }
// they must differ, right?
Update
I realized this doesn't work as expected for a certain case:
For value types, ReferenceEquals returns false so we fall back to Equals, which behaves as expected.
For reference types where ReferenceEquals returns true, we consider them "same" as expected.
For reference types where ReferenceEquals returns false and Equals returns false, we consider them "different" as expected.
For reference types where ReferenceEquals returns false and Equals returns true, we consider them "same" even though we want "different".
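That last case is easy to reproduce with strings:

string first = "same";
string second = new string("same".ToCharArray()); // equal contents, different reference
Console.WriteLine(ReferenceEquals(first, second)); // False
Console.WriteLine(first.Equals(second));           // True: reported as "same" even though the references differ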
So the lesson is: "don't get clever".
I suppose
I'd like to keep a simple reference comparison, unless the value is boxed.
is somewhat equivalent to
If the value is boxed, I'll do a non-"simple reference comparison".
This means the first thing you'll need to do is to check whether the value is boxed or not.
If there exists a method to check whether an object is a boxed value type, it is probably at least as complex as that "overkill" method you linked to. And whatever the simplest such check turns out to be, it's unlikely to be simpler than just calling the object's Equals() method; still, I've bookmarked this question to find out, just in case.
(not sure if I was logical)
Link:
• Consider overriding Equals on a reference type if the semantics of the type are based on the fact that the type represents some value(s).
• Most reference types must not overload the equality operator, even if they override Equals. However, if you are implementing a reference type that is intended to have value semantics, such as a complex number type, you must override the equality operator.
a) To my understanding, for different instances of a reference type to be interchangeable, we should override both the Equals method and the equality operator, and also make the type immutable?
b) Doesn't a reference type having value semantics suggest that different instances ( that represent the same value ) of that type should be interchangeable?
c) But according to the above quote, certain reference types with value semantics should only have the Equals method overridden, but not the equality operator. How can we claim such types have value semantics, since instances of that type are obviously not interchangeable?
d) So based on what criteria do we decide whether a reference type with value semantics should only have its Equals method overridden or also its equality operator? Simply based on whether or not we're willing to make the type immutable?
Thanks!
Regarding point A, yes, the type ought to be immutable. From MSDN:
You should not override Equals on a mutable reference type.
I think that D is the core question here, and the framework design guidelines seem to indicate that this comes down to performance:
AVOID overloading equality operators on reference types if the implementation would be significantly slower than that of reference equality.
Eric Lippert has some interesting things to say about this here. My favorite quote from it is:
The long answer is that the whole thing is weird and neither works the way it ideally ought to.
Personally this lets me breathe a sigh of relief, as I have always been of the opinion that "==" is functionally a readable shorthand for Equals() (even though I know it isn't).
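For illustration, the complex-number case from the quoted guideline might look roughly like this (a sketch of an immutable reference type with value semantics; both Equals and the operators are overridden so they agree):

public sealed class Complex
{
    private readonly double _re;
    private readonly double _im;

    public Complex( double re, double im ) { _re = re; _im = im; }

    public override bool Equals( object obj )
    {
        Complex other = obj as Complex;
        return !ReferenceEquals( other, null ) && _re == other._re && _im == other._im;
    }

    public override int GetHashCode() { return _re.GetHashCode() ^ _im.GetHashCode(); }

    // Value semantics: == and != must agree with Equals.
    public static bool operator ==( Complex left, Complex right ) { return Equals( left, right ); }
    public static bool operator !=( Complex left, Complex right ) { return !Equals( left, right ); }
}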
What is this code doing? Specifically the default(XX) part. I've never seen it before.
Entities.BizTalkRequestResult result = default(Entities.BizTalkRequestResult);
It's not a cast; it compiles to the default value of Entities.BizTalkRequestResult. If that's a reference type, the default is null. See MSDN: http://msdn.microsoft.com/en-us/library/xwth0h0d(v=vs.80).aspx
There is a misconception; this is not casting at all. The default operator returns the default value of a type, e.g. 0 for int and null for reference types.
default is often used with generics (default(T)) because we don't know the actual type at compile time.
It gives you the default value for the particular type inside the parentheses. E.g. 0 for primitives numeric types like int or float, or null for reference types. It's useful particularly when the type could vary, and you want to write general code that is applicable for all possible types.
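For example, a small sketch of the generic case (the method name is made up):

// Returns the first match, or the type's default value when nothing matches:
// 0 for numeric types, null for reference types.
static T FindOrDefault<T>(IEnumerable<T> items, Func<T, bool> predicate)
{
    foreach (T item in items)
    {
        if (predicate(item))
            return item;
    }
    return default(T);
}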
I thought I'd seen it all, but this... :)
I was working on a generic graph of type string,
Graph<string> graph = new Graph<string>();
Graph is declared with a class constraint like this:
public class Graph<T> where T : class
Next I fill up the graph with some dynamically generated strings:
for (char t = 'A'; t < 'J'; t++)
{
graph.Add(t.ToString());
}
So far so good. (Node is an internal class containing the original value and a list of references to other nodes, because it's a graph.)
Now, when I try to create relations between the different nodes, I have to look up the right node by checking its value, and that's where the weirdness starts.
The following code is a direct copy of the result found in the Immediate Window after doing some tests:
Nodes.First().Value
"A"
Nodes.First().Value == "A"
false
Nodes.First().Value.ToString() == "A"
true
Am I totally missing something, or shouldn't Nodes.First().Value == "A" use a string comparison? (The JIT compiler has knowledge of the type being used at runtime, and with that, its supported methods, right?) It seems to me that when not explicitly specifying a string, it will do a reference check rather than a string comparison.
It would be great if someone could explain this to me.
Thanks in advance!
If the types aren't fully known up front (i.e. Value is only known as T, and is not strictly known to be a string), use things like:
object.Equals(Nodes.First().Value,"A")
Of course, you could cast, but in this case you'd need a double-cast ((string)(object)) which is ugly.
If you know the two objects are the same type (i.e. two T values), then you can use:
EqualityComparer<T>.Default.Equals(x,y)
The advantage of the above is that it avoids boxing of structs, supports lifted Nullable<T> comparisons, and uses IEquatable<T> where available, in addition to Equals.
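Applied to the graph from the question, a lookup might look like this (a sketch; Nodes and Node<T> are assumed from the question's internal class):

Node<T> FindNode(T value)
{
    // Works for any T without boxing or casting, including strings and structs.
    return Nodes.First(n => EqualityComparer<T>.Default.Equals(n.Value, value));
}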
If the Value property of your Nodes is object, the == operator in
Nodes.First().Value == "A"
will do a comparison by reference instead of comparing strings.
== is a static method and therefore not virtual. The selection of which == method to use is done at compile-time, not run-time. Depending on the compile-time type of the object, it is probably choosing the implementation of == for objects that compares by reference.
If you use the virtual Equals methods instead, this will work as you expect.
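A minimal demonstration of that compile-time selection:

object a = "A";
object b = new string(new[] { 'A' });  // equal contents, but a distinct, non-interned reference
Console.WriteLine(a == b);       // False: object's ==, picked at compile time, compares references
Console.WriteLine(a.Equals(b));  // True: virtual dispatch reaches String.Equals at run time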
I've been searching for some good guidance on this since the concept was introduced in .net 2.0.
Why would I ever want to use non-nullable data types in c#? (A better question is why wouldn't I choose nullable types by default, and only use non-nullable types when that explicitly makes sense.)
Is there a 'significant' performance hit to choosing a nullable data type over its non-nullable peer?
I much prefer to check my values against null instead of Guid.Empty, string.Empty, DateTime.MinValue, <= 0, etc., and to work with nullable types in general. And the only reason I don't choose nullable types more often is the itchy feeling in the back of my head that makes me feel like it's more than backwards compatibility that forces that extra '?' character to explicitly allow a null value.
Is there anybody out there that always (most always) chooses nullable types rather than non-nullable types?
Thanks for your time,
The reason why you shouldn't always use nullable types is that sometimes you're able to guarantee that a value will be initialized. And you should try to design your code so that this is the case as often as possible. If there is no way a value can possibly be uninitialized, then there is no reason why null should be a legal value for it. As a very simple example, consider this:
List<int> list = new List<int>();
int c = list.Count;
This is always valid. There is no possible way in which c could be uninitialized. If it was turned into an int?, you would effectively be telling readers of the code "this value might be null. Make sure to check before you use it". But we know that this can never happen, so why not expose this guarantee in the code?
You are absolutely right in cases where a value is optional. If we have a function that may or may not return a string, then return null. Don't return string.Empty. Don't return "magic values".
But not all values are optional. And making everything optional makes the rest of your code far more complicated (it adds another code path that has to be handled).
If you can specifically guarantee that this value will always be valid, then why throw away this information? That's what you do by making it a nullable type. Now the value may or may not exist, and anyone using the value will have to handle both cases. But you know that only one of these cases is possible in the first place. So do users of your code a favor, and reflect this fact in your code. Any users of your code can then rely on the value being valid, and they only have to handle a single case rather than two.
Because it's inconvenient to always have to check whether the nullable type is null.
Obviously there are situations where a value is genuinely optional, and in those cases it makes sense to use a nullable type rather than magic numbers etc, but where possible I would try to avoid them.
// nice and simple, this will always work
int a = myInt;
// compiler won't let you do this
int b = myNullableInt;
// compiler allows these, but causes runtime error if myNullableInt is null
int c = (int)myNullableInt;
int d = myNullableInt.Value;
// instead you need to do something like these, cumbersome and less readable
int e = myNullableInt ?? defaultValue;
int f = myNullableInt.HasValue ? myNullableInt.Value : GetValueFromSomewhere();
I think the language designers feel that 'reference types being nullable by default' was a mistake, and that non-nullable is the only sensible default, and you should have to opt into nullness. (This is how it is in many modern functional languages.) "null" is usually a heap of trouble.
You seem to have two different questions...
Why would I ever want to use non-nullable data types in C#?
Simple, because the value-type data you're relying on is guaranteed by the compiler to actually have a value!
Why wouldn't I choose nullable types by default, and only use non-nullable types when that explicitly makes sense?
As Joel has already mentioned, a type can only be null if it is a reference type. Value types are guaranteed by the compiler to have a value. If your program depends on a variable to have a value, then this is the behavior you will want by not choosing a nullable type.
Of course, when your data is coming from anywhere that is not your program, then all bets are off. The best example is from a database. Database fields can be null, so you would want your program variable to mimic this value - not just create a "magic" value (i.e. -1, 0, or whatever) that "represents" null. You do this with nullable types.
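For example (a sketch using IDataReader; the column name is made up):

// Map a database NULL to a nullable int instead of a magic value.
int ordinal = reader.GetOrdinal("Age");
int? age = reader.IsDBNull(ordinal) ? (int?)null : reader.GetInt32(ordinal);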
Although null values can be convenient for using as "not-initialized-yet" or "not-specified" values, they make the code more complex, mainly because you're overloading the meaning of null as well as the variable (number-or-null vs. just-a-number).
NULL values are favoured by many database designers and SQL database programmers but with a small change in thinking about the problem you can do away with null values and actually have simpler and more reliable code (e.g., no worrying about NullReferenceExceptions).
There's actually a large demand for a "T!" operator that makes any reference type non-nullable, similar to how "T?" makes value types nullable, and Anders Hejlsberg, the inventor of C#, wished he had included the ability.
See also the question, Why is “null” present in C# and Java?
I tend to use Nullable types wherever they make sense -- I won't care about performance until it's a problem, then I'll fix the few areas where it is and be done with it.
However, I also find that in general, most of my values end up being non-nullable. In fact, there are many times I'd actually like a NotNullable I can use with reference types to find out about a null problem when I get the null, not later on when I try to use it.
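Something like this minimal wrapper is what I have in mind (a sketch, not a framework type):

public struct NotNull<T> where T : class
{
    private readonly T _value;

    public NotNull(T value)
    {
        // Fail at the assignment site instead of at some later dereference.
        if (value == null) throw new ArgumentNullException("value");
        _value = value;
    }

    public T Value { get { return _value; } }

    public static implicit operator T(NotNull<T> wrapper) { return wrapper.Value; }
}

(Caveat: default(NotNull<T>) still leaves _value null, since a struct cannot forbid its default value.)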
The only time a nullable type should ever be used is when a certain field in a database table absolutely requires that a null be sent or received by the application at some point. Even in such a case, one should always try to find a way around using the nullable type. Bool isn't always your best friend.