Not to repeat this question too much, but I already did a search and came up empty. I have two EntityCollections of type T and I would like to find the common items in each. The catch? All fields except one must match. For example, if T is CustomSet, and CustomSet has fields F1, F2, F3 and an FK field OtherId, then F1, F2 and F3 must match (they could be strings, ints, anything really), but the OtherId will never match. My current implementation:
var intersections = source.Intersect(destination).ToList();
will never yield any results because that OtherId column will never match in any other collection, even though the fields F1, F2 and F3 may match. So I'm proposing a custom implementation of IEqualityComparer which looks like this:
var intersections = source.Intersect(destination, new EntityCollectionComparer<T>()).ToList();
public class EntityCollectionComparer<T> : IEqualityComparer<T>
{
#region IEqualityComparer<T> Members
public bool Equals(T x, T y)
{
if (x.Equals(y))
return true;
else
return false;
}
public int GetHashCode(T obj)
{
if (obj is CustomSet)
{
CustomSet temp = obj as CustomSet;
return (temp.F1.GetHashCode() ^ temp.F2.GetHashCode() ^ temp.F3.GetHashCode());
}
return obj.GetHashCode();
}
#endregion
}
Now, I'm only testing this, so the obj getting passed in is of type CustomSet; I will add the necessary if statements for my other types if I can get this to function properly. I know that the Intersect extension uses the GetHashCode instead of the Equals to compare items which is why I really don't care what's in my equals as this class will never be called but for the Intersect extension on EntityCollections. The thing is, this doesn't work. On my test set, I know I have 28 items in my 'source' collection and 28 items in my 'destination' collection, and all the fields match (except, obviously, the OtherId field). I stepped through GetHashCode as it looped 56 times and was able to match up hash codes on all 28 items from each set, yet 'intersections' yielded a count of 0. Is there something I'm doing wrong, or missing? Thanks.
This is your problem:
I know that the Intersect extension uses the GetHashCode instead of the Equals to compare items which is why I really don't care what's in my equals as this class will never be called but for the Intersect extension on EntityCollections.
That's simply not true. GetHashCode is used as a first "quick" way of bucketing values, but Equals will still be called for any items with the same hash, otherwise you can't know they're equal.
That's the way hash tables etc. always work: the hashes should be different for unequal values where feasible, for performance reasons, but they are allowed to collide.
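In other words, Equals has to compare the same fields that GetHashCode hashes. A minimal sketch for the CustomSet case from the question (the field names F1, F2 and F3 come from the question; a real version would also handle your other types and null fields):
public class CustomSetComparer : IEqualityComparer<CustomSet>
{
    public bool Equals(CustomSet x, CustomSet y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        // Compare exactly the fields that GetHashCode uses, ignoring OtherId.
        return object.Equals(x.F1, y.F1) && object.Equals(x.F2, y.F2) && object.Equals(x.F3, y.F3);
    }
    public int GetHashCode(CustomSet obj)
    {
        if (obj == null) return 0;
        return obj.F1.GetHashCode() ^ obj.F2.GetHashCode() ^ obj.F3.GetHashCode();
    }
}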
Related
I am trying to implement an IEqualityComparer that has a tolerance on a date comparison. I have also looked into this question. The problem is that I can't use a workaround because I am using the IEqualityComparer in a LINQ .GroupJoin(). I have tried a few implementations that allow for tolerance. I can get the Equals() to work because I have both objects but I can't figure out how to implement GetHashCode().
My best attempt looks something like this:
public class ThingWithDateComparer : IEqualityComparer<IThingWithDate>
{
private readonly int _daysToAdd;
public ThingWithDateComparer(int daysToAdd)
{
_daysToAdd = daysToAdd;
}
public int GetHashCode(IThingWithDate obj)
{
unchecked
{
var hash = 17;
hash = hash * 23 + obj.BirthDate.AddDays(_daysToAdd).GetHashCode();
return hash;
}
}
public bool Equals(IThingWithDate x, IThingWithDate y)
{
throw new NotImplementedException();
}
}
public interface IThingWithDate
{
DateTime BirthDate { get; set; }
}
Since .GroupJoin() builds a hash table out of GetHashCode(), the days to add end up being applied to both/all objects, so this doesn't work.
The problem is impossible, conceptually. You're trying to compare objects in a way that doesn't have a form of equality that is necessary for the operations you're trying to perform with it. For example, GroupJoin is dependent on the assumption that if A is equal to B, and B is equal to C, then A is equal to C, but in your situation that's not true. A and B may be "close enough" together for you to want to group them, but A and C may not be.
You're going to need to not implement IEqualityComparer at all, because you cannot fulfill the contract that it requires. If you want to create a mapping of items in one collection to all of the items in another collection that are "close enough" to it, then you're going to need to write that algorithm yourself (doing so efficiently is likely to be hard, but doing so inefficiently shouldn't be that difficult), rather than using GroupJoin, because it's not capable of performing that operation.
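If it helps, here is the rough shape of the inefficient-but-simple approach: a plain nested scan rather than an IEqualityComparer. This is only a sketch built on the IThingWithDate interface from the question; the method name and tuple shape are just illustrative, and it's O(n·m):
// Sketch: for each item on the left, collect all right-hand items whose
// BirthDate falls within the given tolerance.
static IEnumerable<(IThingWithDate Item, List<IThingWithDate> CloseMatches)> MatchByDate(
    IEnumerable<IThingWithDate> left,
    IEnumerable<IThingWithDate> right,
    TimeSpan tolerance)
{
    var rightList = right.ToList();
    foreach (var l in left)
    {
        var matches = rightList
            .Where(r => (l.BirthDate - r.BirthDate).Duration() <= tolerance)
            .ToList();
        yield return (l, matches);
    }
}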
I can't see any way to generate a logical hash code for your given criteria.
The hash code is used to determine whether two dates should stick together. If they should group together, then they must return the same hash code.
If your "float" is 5 days, that means that 1/1/2000 must generate the same hash code as 1/4/2000, and 1/4/2000 must generate the same hash code as 1/8/2000 (since each pair is within 5 days of each other). That implies that 1/1/2000 has the same code as 1/8/2000 (since if a=b and b=c, then a=c).
But 1/1/2000 and 1/8/2000 are outside the 5-day "float", so they must not hash together; that's a contradiction.
I tried to follow the guidelines from MSDN and also referred to this great question, but the following does not seem to behave as expected.
I'm trying to represent a structure similar to an FQN, where if P1 is listed before P2, P2 only exists in the same set as P1, like how scope works.
On the subject of GetHashCode()
I have a class with properties like this.
class data{
public readonly string p1, p2;
public data(string p1, string p2) {
this.p1 = p1;
this.p2 = p2;
}
public override int GetHashCode()
{
return this.p1.GetHashCode() ^ this.p2.GetHashCode();
}
/*also show the equal for comparison*/
public override bool Equals(System.Object obj)
{
if (obj == null)
return false;
data d = obj as data;
if ((System.Object)d == null)
return false;
/*I thought this would be smart*/
return d.ToString() == this.ToString();
}
public override string ToString() {
return "[" + p1 +"][" + p2+ "]";
}
}
In a Dictionary (dict) I use data as a key, so this would make the scope look like d1.p1.p2 (or rather p2 of p1 of d1, however you prefer to imagine it)
Dictionary<data,int> dict = new Dictionary<data,int>();
I've examined the behavior when d1.p1 and d2.p1 are different: the operation resolves correctly. However, when d1.p1 and d2.p1 are the same and the p2 values of d1 and d2 are different, I observe the following behavior.
data d1 = new data(){ p1="p1", p2="p2" };
data d2 = new data(){ p1="p1", p2="XX" };
dict.add(d1, 0);
dict.add(d2, 1);
dict[d1] = 4;
The result is that both elements are 4
Is GetHashCode() overridden correctly?
Is Equals overridden correctly?
If they are both fine how/why does this behavior happen?
On the subject of a Dictionary
In the Watch window (VS2013), my dictionary's key list shows me, instead of a single key per index as I would normally expect, each property of my data object as a key for a single index. So I'm not sure whether the problem lies there or I'm just misunderstanding the Watch window's representation of an object used as a key. I know that's the way VS displays an object, but I'm not certain that's how I would expect it to be displayed for a key in a dictionary.
I thought GetHashCode() was a Dictionary's primary "comparison" operation, is this always correct?
What's the real "Index" to a Dictionary where the key is an object?
UPDATE
After looking at each hash code directly, I noticed that they do repeat, yet the Dictionary does not recognize the key as already existing. Below is an example of the data I see.
1132917379 string: [ABC][ABC]
-565659420 string: [ABC][123]
-1936108909 string: [123][123]
//second loop with next set of strings
1132917379 string: [xxx][xxx]
-565659420 string: [xxx][yyy]
//...etc
Is GetHashCode() overridden correctly?
Sure, for some definition of "correct". It may not be overridden well, but it's not an incorrect implementation (since two instances of the class that are considered equal will hash to the same value). Of course with that requirement you could always just return 0 from GetHashCode and it would be "correct". It certainly wouldn't be good.
That said, your particular implementation is not as good as it could be. For example, in your class the order of the strings matters, i.e. new data( "A", "B" ) != new data( "B", "A" ). However, these will always hash equal because your GetHashCode implementation is symmetric. Better to break the symmetry in some fashion. For example:
public override int GetHashCode()
{
return p1.GetHashCode() ^ ( 13 * p2.GetHashCode() );
}
Now it's less likely that there will be a collision for two instances that are not equal.
Is Equals overridden correctly?
Well, it can definitely be improved. For example, the first null check is redundant and so is the cast to object in the second comparison. The whole thing would probably be better written as:
public override bool Equals( object obj )
{
    var other = obj as data;
    if( other == null ) return false;
    return p1 == other.p1 && p2 == other.p2;
}
I also removed the call to ToString since it doesn't significantly simplify the code or make it more readable. It's also an inefficient way of performing the comparison, since you have to construct two new strings before the comparison even happens. Just comparing the members directly gives you more opportunities for an early out and, more importantly, is a lot easier to read (the actual equality implementation doesn't depend on the string representation).
If they are both fine how/why does this behavior happen?
I don't know, because the code you've given won't do this. It also won't compile: your data class has two readonly fields, and you can't assign those using an object initializer as you've shown in your last code snippet.
I can only speculate as to the behavior you're seeing, because nothing you've shown here would result in the behavior as described.
The best advice I can give is to make sure that your key class is not mutable. Mutable types will not play nice with Dictionary. The Dictionary class does not expect the hash codes of objects to change, so if GetHashCode depends on any part of your class that is mutable, it's very possible for things to get very messed up.
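To illustrate that last point with a deliberately bad, hypothetical key class (not the data class from the question, whose fields are readonly):
class MutableKey
{
    public string Name;   // mutable on purpose - exactly what you should NOT do for a dictionary key
    public override bool Equals(object obj) { return (obj as MutableKey)?.Name == Name; }
    public override int GetHashCode() { return Name.GetHashCode(); }
}
var key = new MutableKey { Name = "a" };
var dict = new Dictionary<MutableKey, int> { { key, 1 } };
key.Name = "b";                     // the hash code changes after insertion
bool found = dict.ContainsKey(key); // false: the lookup probes the bucket for "b",
                                    // but the entry still sits in the bucket for "a"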
I thought GetHashCode() was a Dictionary's primary "comparison" operation, is this always correct?
Dictionary only uses GetHashCode as a way to "address" objects (to be specific, the hash code is used to identify which bucket an item should be placed in). It doesn't use it directly as a comparison; at most it can use it to determine that two objects are not equal, never that they are equal.
What's the real "Index" to a Dictionary where the key is an object?
I'm not entirely sure what you're asking here, but I'm inclined to say that the answer is that it doesn't matter. Where the item goes is unimportant. If you care about that sort of thing, you probably shouldn't be using a Dictionary.
Is GetHashCode() overridden correctly?
No. You allow passing null for p1 or p2 and null.GetHashCode() throws a NullReferenceException which is not allowed in GetHashCode. Either forbid passing null, or make GetHashCode return an int for nulls (zero is OK).
You are also XORing the unaltered hash codes; this means every instance whose two values are equal will have a hash code of zero. This is a very common collision; typically one multiplies each hash code by a prime number to avoid it.
Is Equals overridden correctly?
No. The page you linked to is the non-generic Equals used by System.Collections.Hashtable. You are using the generic System.Collections.Generic.Dictionary, which uses the generic IEquatable<T>. You need to implement IEquatable<data> as explained in the accepted answer to the SO question you posted.
It is true that IEquatable<data> will fall back to Equals(System.Object obj) if not defined, but do not rely on that behavior. Also, converting values to strings to compare them is not "smart". Any time you feel you should write a comment excusing something as "smart" you are making a mistake.
A better implementation of data would be:
public class MatPair : IEquatable<MatPair>
{
public readonly string MatNeedsToExplainWhatThisRepresents;
public readonly string MatNeedsToExplainThisToo;
public MatPair(string matNeedsToExplainWhatThisRepresents,
string matNeedsToExplainThisToo)
{
if (matNeedsToExplainWhatThisRepresents == null) throw new ArgumentNullException("matNeedsToExplainWhatThisRepresents");
if (matNeedsToExplainThisToo == null) throw new ArgumentNullException("matNeedsToExplainThisToo");
this.MatNeedsToExplainWhatThisRepresents = matNeedsToExplainWhatThisRepresents;
this.MatNeedsToExplainThisToo = matNeedsToExplainThisToo;
}
[Obsolete]
public override bool Equals(object obj)
{
return Equals(obj as MatPair);
}
public bool Equals(MatPair matPair)
{
return matPair != null
&& matPair.MatNeedsToExplainWhatThisRepresents == MatNeedsToExplainWhatThisRepresents
&& matPair.MatNeedsToExplainThisToo == MatNeedsToExplainThisToo;
}
public override int GetHashCode()
{
unchecked
{
return MatNeedsToExplainWhatThisRepresents.GetHashCode() * 31
^ MatNeedsToExplainThisToo.GetHashCode();
}
}
public override string ToString()
{
return "{" + MatNeedsToExplainWhatThisRepresents + ", "
+ MatNeedsToExplainThisToo + "}";
}
}
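With something like that in place, the dictionary scenario from the original question behaves as expected; a quick sketch:
var dict = new Dictionary<MatPair, int>();
var d1 = new MatPair("p1", "p2");
var d2 = new MatPair("p1", "XX");
dict.Add(d1, 0);
dict.Add(d2, 1);
dict[d1] = 4;
// dict[d1] is now 4 and dict[d2] is still 1; the two keys no longer collide as "equal".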
Here is the equality comparer I just wrote because I wanted a distinct set of items from a list containing entities.
class InvoiceComparer : IEqualityComparer<Invoice>
{
public bool Equals(Invoice x, Invoice y)
{
// A
if (Object.ReferenceEquals(x, y)) return true;
// B
if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
// C
return x.TxnID == y.TxnID;
}
public int GetHashCode(Invoice obj)
{
if (Object.ReferenceEquals(obj, null)) return 0;
return obj.TxnID.GetHashCode();
}
}
Why does Distinct require a comparer as opposed to a Func<T,T,bool>?
Are (A) and (B) anything other than optimizations, and are there scenarios when they would not act the expected way, due to subtleness in comparing references?
If I wanted to, could I replace (C) with
return GetHashCode(x) == GetHashCode(y)
so it can use hash codes and be O(n) as opposed to O(n²)?
(A) is an optimization.
(B) is necessary; otherwise, it would throw a NullReferenceException.
If Invoice is a struct, however, they're both unnecessary and slower.
No. Hash codes are not unique.
A is a simple and quick way to check whether both references point to the same memory address, and therefore to the same object.
B - if one of the references is null, obviously it does not make any sense to do the equality comparison.
C - no; sometimes GetHashCode() can return the same value for different objects (a hash collision), so you should do the equality comparison.
Regarding the same hash code value for different objects, MSDN:
If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values.
Distinct() basically works in terms of "not equal"; therefore, if your list contains non-primitive types, you must implement your own equality comparer.
At A, you check whether the two references are identical. If two objects are equal they don't have to be identical, but if they are identical you can be sure they are equal, so the A check can make the method faster in some cases.
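For reference, the comparer above would typically be used like this (assuming a List<Invoice> named invoices; the variable name is just for illustration):
// Distinct by TxnID: hash codes bucket the invoices, Equals confirms matches within a bucket.
var distinctInvoices = invoices.Distinct(new InvoiceComparer()).ToList();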
So, I'm doing an assignment for my C# class and I have a list of objects. These objects have an 'int rand' field which is assigned a random number. I then wanted to re-sort the objects in the list based on this rand field.
I found this article:
http://www.developerfusion.com/code/5513/sorting-and-searching-using-c-lists/
And it helped. I modified this line to fit my code:
people.Sort(delegate(Person p1, Person p2) { return p1.age.CompareTo(p2.age); });
And it does what I want.
What I want to know is: how does it work? That looks very confusing to me.
The Sort method needs some comparison to sort by. In your current code you pass that comparison in as a delegate, but you can also embed it in the class definition to reduce code complexity; just implement IComparable on your Person class:
public class Person : IComparable
{
public int age { get; set; }
public int CompareTo(object obj)
{
var person = obj as Person;
if (person != null)
{
if (age > person.age)
return 1;
else if (age == person.age)
return 0;
return -1;
}
return 1;
}
}
Then simply call Sort() without a delegate.
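For example (a short sketch, assuming people is the same List<Person> as before):
people.Sort();                    // uses Person.CompareTo, so the list ends up ordered by age
Console.WriteLine(people[0].age); // youngest person after the sort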
If you use lambda notation it gets a little easier to read, IMO:
people.Sort((p1, p2) => p1.age.CompareTo(p2.age));
When you use the Sort method you are sorting the list, but the method needs to know what to sort by, and this is where the delegate comes in handy, as you can use it to specify any ordering you want. Here you are looking at person 1 and person 2 and ordering by age. If you wanted to sort by something else, like a Name property (if you had one), you would write it as:
people.Sort((p1, p2) => string.Compare(p1.Name, p2.Name));
The list will use the function you pass in (the return p1.age.CompareTo(p2.age); part) to compare the different objects in the list. It basically allows you to "teach" the list how you want the items compared.
The list will call your function, passing in two instances of the class to be compared. You return -1 to say the first is less than the second, 0 to say they are equal, and 1 to say the first is greater. Your example just passes the call on to the built-in comparison (which returns the same -1, 0, 1 pattern) for whatever type the age variable is, most likely an integer.
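As a small illustration of that -1/0/1 contract (a hypothetical one-liner, not from the original question), swapping the two arguments reverses the order:
people.Sort((p1, p2) => p2.age.CompareTo(p1.age));   // descending: oldest first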
In order to sort, you need to be able to figure out if one item goes before or after another thing. It makes sense that this "comparator" would be a function or method as it compartmentalizes the knowledge away.
If you look at the documentation for CompareTo you'll notice that A.CompareTo(B) is intended to return -1 (A goes before B), 0 (A and B are equal) or 1 (A goes after B).
The delegate keyword in this instance creates an anonymous function which is used as the comparator, and the body of that function calls CompareTo to compare the age property of the two people involved and return the result.
The result of calling this method on every potential pair of items (or some subset - I'm not sure exactly how Sort is implemented) is then used internally by the Sort method to figure out where to place each item (in front of or behind p2, in this example).
List<T>.Sort uses a quicksort algorithm to sort the list. The worst-case complexity is O(n²), which means that in the worst case the delegate you provided will be called on the order of n² times, where n is the number of items in the list. A delegate is like a pointer to a function.
It passes in the two objects it wishes to compare, and the delegate you provided returns -1, 0, or 1. If it's -1, p1 is less than p2; if 0, both objects are the same; if 1, p1 is greater than p2. So, in the end, the delegate you provide and the values it returns decide whether the list ends up in ascending or descending order.
Let's divide the problem so that you can understand each piece separately:
The Sort method:
This one takes a delegate that contains the "how to" for comparing two elements of the list.
As soon as you teach the list how to compare two elements, it can sort all elements by comparing them in pairs.
The inline delegate
An inline delegate is the declaration of a method, without a name, that does something.
delegate(Person p1, Person p2) { return p1.age.CompareTo(p2.age); }
This delegate tells the compiler how to compare two Person objects: to compare p1 with p2, compare p1.age with p2.age.
Joining things
The following line of code contains both elements, the sort method, and the "how to" compare two People objects.
people.Sort(delegate(Person p1, Person p2) { return p1.age.CompareTo(p2.age); });
So now it knows how to sort the list.
Given the following class
public class Foo
{
public int FooId { get; set; }
public string FooName { get; set; }
public override bool Equals(object obj)
{
Foo fooItem = obj as Foo;
if (fooItem == null)
{
return false;
}
return fooItem.FooId == this.FooId;
}
public override int GetHashCode()
{
// Which is preferred?
return base.GetHashCode();
//return this.FooId.GetHashCode();
}
}
I have overridden the Equals method because Foo represents a row from the Foos table. Which is the preferred way to override GetHashCode?
Why is it important to override GetHashCode?
Yes, it is important if your item will be used as a key in a dictionary, or HashSet<T>, etc - since this is used (in the absence of a custom IEqualityComparer<T>) to group items into buckets. If the hash-code for two items does not match, they may never be considered equal (Equals will simply never be called).
The GetHashCode() method should reflect the Equals logic; the rules are:
if two things are equal (Equals(...) == true) then they must return the same value for GetHashCode()
if the GetHashCode() values are equal, the objects are not necessarily the same; this may be a collision, and Equals will be called to see whether it is a real equality or not.
In this case, it looks like "return FooId;" is a suitable GetHashCode() implementation. If you are testing multiple properties, it is common to combine them using code like below, to reduce diagonal collisions (i.e. so that new Foo(3,5) has a different hash-code to new Foo(5,3)):
In modern frameworks, the HashCode type has methods to help you create a hash code from multiple values; on older frameworks, you'd need to go without, so something like:
unchecked // only needed if you're compiling with arithmetic checks enabled
{ // (the default compiler behaviour is *disabled*, so most folks won't need this)
int hash = 13;
hash = (hash * 7) + field1.GetHashCode();
hash = (hash * 7) + field2.GetHashCode();
...
return hash;
}
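On newer targets, the combination mentioned above can be written with System.HashCode (built into newer .NET versions; a Microsoft.Bcl.HashCode package exists for older targets), for example:
// Combines the fields into one hash code; order matters, so (3,5) and (5,3) hash differently.
public override int GetHashCode() => HashCode.Combine(FooId, FooName);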
Oh - for convenience, you might also consider providing == and != operators when overriding Equals and GetHashCode.
A demonstration of what happens when you get this wrong is here.
It's actually very hard to implement GetHashCode() correctly because, in addition to the rules Marc already mentioned, the hash code should not change during the lifetime of an object. Therefore the fields which are used to calculate the hash code must be immutable.
I finally found a solution to this problem when I was working with NHibernate.
My approach is to calculate the hash code from the ID of the object. The ID can only be set through the constructor, so if you want to change the ID, which is very unlikely, you have to create a new object which has a new ID and therefore a new hash code. This approach works best with GUIDs because you can provide a parameterless constructor which randomly generates an ID.
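A minimal sketch of that ID-based approach (the class and member names here are only illustrative):
public class Entity
{
    public Guid Id { get; }   // immutable: assigned once in the constructor

    public Entity() : this(Guid.NewGuid()) { }   // parameterless ctor generates a random ID
    public Entity(Guid id) { Id = id; }

    public override bool Equals(object obj) { return (obj as Entity)?.Id == Id; }
    public override int GetHashCode() { return Id.GetHashCode(); }
}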
By overriding Equals you're basically stating that you know better how to compare two instances of a given type.
Below you can see an example of how ReSharper writes a GetHashCode() function for you. Note that this snippet is meant to be tweaked by the programmer:
public override int GetHashCode()
{
unchecked
{
var result = 0;
result = (result * 397) ^ m_someVar1;
result = (result * 397) ^ m_someVar2;
result = (result * 397) ^ m_someVar3;
result = (result * 397) ^ m_someVar4;
return result;
}
}
As you can see it just tries to guess a good hash code based on all the fields in the class, but if you know your object's domain or value ranges you could still provide a better one.
Please don't forget to check the obj parameter against null when overriding Equals().
And also compare the type.
public override bool Equals(object obj)
{
Foo fooItem = obj as Foo;
if (fooItem == null)
{
return false;
}
return fooItem.FooId == this.FooId;
}
The reason for this is: Equals must return false on comparison to null. See also http://msdn.microsoft.com/en-us/library/bsc2ak47.aspx
How about:
public override int GetHashCode()
{
return string.Format("{0}_{1}_{2}", prop1, prop2, prop3).GetHashCode();
}
Assuming performance is not an issue :)
As of .NET 4.7 the preferred method of overriding GetHashCode() is shown below. If targeting older .NET versions, include the System.ValueTuple NuGet package.
// C# 7.0+
public override int GetHashCode() => (FooId, FooName).GetHashCode();
In terms of performance, this method will outperform most composite hash code implementations. The ValueTuple is a struct so there won't be any garbage, and the underlying algorithm is as fast as it gets.
Just to add to the above answers:
If you don't override Equals, then the default behavior is that the references of the objects are compared. The same applies to the hash code: the default implementation is typically based on the memory address of the object.
Because you did override Equals, the correct behavior is now to compare whatever you implemented in Equals and not the references, so you should do the same for the hash code.
Clients of your class will expect the hash code to follow the same logic as the Equals method. For example, LINQ methods which use an IEqualityComparer first compare the hash codes, and only if those are equal do they call Equals(), which might be more expensive to run. If we don't implement GetHashCode, equal objects will probably have different hash codes (because they have different memory addresses) and will wrongly be determined to be not equal (Equals() won't even be hit).
In addition, besides the problem that you might not be able to find your object in a dictionary (because it was inserted with one hash code and, when you look for it, the default hash code will probably be different and again Equals() won't even be called, as Marc Gravell explains in his answer), you also introduce a violation of the dictionary/hash-set concept, which should not allow duplicate keys:
you already declared that those objects are essentially the same when you overrode Equals, so you don't want both of them as different keys in a data structure that is supposed to have unique keys. But because they have different hash codes, the "same" key will be inserted as a different one.
It is because the framework requires that two objects that are the same must have the same hash code. If you override the Equals method to do a special comparison of two objects and the two objects are considered the same by the method, then the hash code of the two objects must also be the same. (Dictionaries and hash tables rely on this principle.)
We have two problems to cope with.
1. You cannot provide a sensible GetHashCode() if any field in the object can be changed. Also, often an object will NEVER be used in a collection that depends on GetHashCode(), so the cost of implementing GetHashCode() is often not worth it, or it is not possible.
2. If someone puts your object in a collection that calls GetHashCode(), and you have overridden Equals() without also making GetHashCode() behave in a correct way, that person may spend days tracking down the problem.
Therefore, by default, I do this:
public class Foo
{
public int FooId { get; set; }
public string FooName { get; set; }
public override bool Equals(object obj)
{
Foo fooItem = obj as Foo;
if (fooItem == null)
{
return false;
}
return fooItem.FooId == this.FooId;
}
public override int GetHashCode()
{
// Some comment to explain if there is a real problem with providing GetHashCode()
// or if I just don't see a need for it for the given class
throw new Exception("Sorry I don't know what GetHashCode should do for this class");
}
}
Hash codes are used by hash-based collections like Dictionary, Hashtable, HashSet, etc. The purpose of this code is to very quickly pre-sort a specific object by putting it into a specific group (bucket). This pre-sorting helps tremendously in finding the object when you need to retrieve it back from the hash collection, because the code has to search for your object in just one bucket instead of among all the objects the collection contains. The better the distribution of hash codes (the better the uniqueness), the faster the retrieval. In the ideal situation, where each object has a unique hash code, finding it is an O(1) operation. In most cases it approaches O(1).
It's not necessarily important; it depends on the size of your collections and your performance requirements and whether your class will be used in a library where you may not know the performance requirements. I frequently know my collection sizes are not very large and my time is more valuable than a few microseconds of performance gained by creating a perfect hash code; so (to get rid of the annoying warning by the compiler) I simply use:
public override int GetHashCode()
{
return base.GetHashCode();
}
(Of course I could use a #pragma to turn off the warning as well but I prefer this way.)
When you are in the position that you do need the performance, then all of the issues mentioned by others here apply, of course. Most important - otherwise you will get wrong results when retrieving items from a hash set or dictionary - is that the hash code must not vary over the lifetime of an object (more accurately, during the time the hash code is needed, such as while it is a key in a dictionary). For example, the following is wrong, because Value is public and so can be changed externally to the class during the lifetime of the instance, so you must not use it as the basis for the hash code:
class A
{
public int Value;
public override int GetHashCode()
{
return Value.GetHashCode(); //WRONG! Value is not constant during the instance's life time
}
}
On the other hand, if Value can't be changed it's ok to use:
class A
{
public readonly int Value;
public override int GetHashCode()
{
return Value.GetHashCode(); //OK Value is read-only and can't be changed during the instance's life time
}
}
You should always guarantee that if two objects are equal, as defined by Equals(), they return the same hash code. As some of the other comments state, in theory this is not mandatory if the object will never be used in a hash-based container like HashSet or Dictionary. I would advise you to always follow this rule, though. The reason is simply that it is way too easy for someone to change a collection from one type to another, with the good intention of actually improving the performance or just conveying the code semantics in a better way.
For example, suppose we keep some objects in a List. Some time later someone realizes that a HashSet is a much better alternative, because of the better search characteristics, for example. This is when we can get into trouble: List would internally use the default equality comparer for the type, which means Equals in your case, while HashSet also makes use of GetHashCode(). If the two behave differently, so will your program. And bear in mind that such issues are not the easiest to troubleshoot.
I've summarized this behavior with some other GetHashCode() pitfalls in a blog post where you can find further examples and explanations.
As of C# 9 (.NET 5 or .NET Core 3.1), you may want to use records, as they implement value-based equality by default.
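For example, the Foo class above could be written as a record in one line (note that, unlike the hand-written Equals, this compares both positional properties):
public record Foo(int FooId, string FooName);   // compiler generates Equals/GetHashCode over FooId and FooName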
It's my understanding that the original GetHashCode() returns the memory address of the object, so it's essential to override it if you wish to compare two different objects.
EDITED:
That was incorrect: the original GetHashCode() method cannot assure the equality of two values, although objects that are equal do return the same hash code.
Using reflection over the public properties, as below, seems to me a better option, because with it you don't have to worry about the addition/removal of properties (although that is not so common a scenario). I also found it to perform better (I compared the time using a System.Diagnostics stopwatch).
public override int GetHashCode()
{
PropertyInfo[] theProperties = this.GetType().GetProperties();
int hash = 31;
foreach (PropertyInfo info in theProperties)
{
if (info != null)
{
var value = info.GetValue(this,null);
if(value != null)
unchecked
{
hash = 29 * hash ^ value.GetHashCode();
}
}
}
return hash;
}