I have a list of objects. I need to sort the list based on the objects' property1, and then sort the resulting list by property2 without losing the grouping done in the first sort.
For example:
The object has two properties, name and region.
I need a final list of objects sorted by region, where objects in the same region are sorted by name.
(Assuming you don't have LINQ available to you, which makes this trivial.)
If you look in MiscUtil, you'll find two useful classes: ProjectionComparer and LinkedComparer (IIRC).
ProjectionComparer basically implements the LINQ "order by" concept - you specify how to convert a source element to a key value, and the comparer will order by those key values.
LinkedComparer takes two comparers and returns a new comparer which uses the "primary" comparer first, and the "secondary" comparer if values are equal with respect to the primary one.
Create two projection comparers (one for each property) and then a linked comparer with the two of them, then pass that to List<T>.Sort. Let me know if you need a full code sample, but it would be something like (using C# 3):
var comparer = new LinkedComparer<Foo>(
    ProjectionComparer<Foo>.Create(x => x.FirstProperty),
    ProjectionComparer<Foo>.Create(x => x.SecondProperty));
(In C# 2 you could use anonymous methods; they'd just be a bit more long-winded.)
Sounds like you want to use LINQ's OrderBy and ThenBy methods.
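For example, a minimal sketch (assuming the objects expose Region and Name properties, as described in the question):
// Sort by region first, then by name within each region; the original list is untouched.
var sorted = items
    .OrderBy(o => o.Region)
    .ThenBy(o => o.Name)
    .ToList();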
A List<T> has a Sort method which takes a Comparison<T> delegate as an argument.
There are also overloads where you can pass in your own comparer.
So you can write a class that implements IComparer<T>, and in its implementation compare the two objects on the properties you want, as in the sketch below.
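A minimal sketch of such a comparer (the Region and Name properties come from the question; the Obj and RegionThenNameComparer names are just illustrative):
using System;
using System.Collections.Generic;

class RegionThenNameComparer : IComparer<Obj>
{
    public int Compare(Obj x, Obj y)
    {
        // Primary key: region; secondary key: name within the same region.
        int byRegion = string.Compare(x.Region, y.Region, StringComparison.Ordinal);
        return byRegion != 0
            ? byRegion
            : string.Compare(x.Name, y.Name, StringComparison.Ordinal);
    }
}

// Usage: list.Sort(new RegionThenNameComparer());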
I am having a hard time grouping a dbset (EntityFramework) by two fields and sending that output to a strongly typed view.
When I use an anonymous type for the composite key I get the right output: a list containing one group, and that group in turn contains two or more items.
If I use a class instead, I get a list of two groups, each of which contains only one item.
var output = context.Transfers.GroupBy(t=> new { t.TNumber, t.Type}).ToList();
var output2 = context.Transfers.AsEnumerable()
.GroupBy(t => new OTSpecs(t.TNumber, t.Type)).ToList();
OTSpecs is just a simple class with those public fields and a parameterized constructor.
I need to add the AsEnumerable(), otherwise I get a System.NotSupportedException: "Only parameterless constructors and initializers are supported in LINQ to Entities".
Also, I need to define the model in the view like this:
@model IEnumerable<IGrouping<OTSpecs, Transfer>>
unless of course it is possible to replace OTSpecs in that line with the anonymous type. But I don't know how.
My question is: why do those lines of code produce different output?
Is it possible to define the model in the view replacing OTSpecs with an anonymous type?
Anonymous types implement equality comparison that compares all their properties. So when you use an anonymous type as a key, LINQ is able to identify that two key objects are the same and should be grouped together.
Your custom class, I suspect, does not implement that, so the default object comparison is used, which just compares references. Two key objects have different references, and thus end up in different groups.
To fix this, you need to either pass in an equality comparer, or override Equals (and GetHashCode) in your class OTSpecs.
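A minimal sketch of that override (the TNumber and Type members come from the question; their exact types are assumptions):
public class OTSpecs
{
    public int TNumber;    // assumed type
    public string Type;    // assumed type

    public OTSpecs(int tNumber, string type)
    {
        TNumber = tNumber;
        Type = type;
    }

    // GroupBy relies on Equals/GetHashCode to decide which keys belong to the same group.
    public override bool Equals(object obj)
    {
        var other = obj as OTSpecs;
        return other != null && TNumber == other.TNumber && Type == other.Type;
    }

    public override int GetHashCode()
    {
        unchecked
        {
            return (TNumber * 397) ^ (Type != null ? Type.GetHashCode() : 0);
        }
    }
}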
Out of curiosity: What comparer is used when sorting a bunch of objects using the following extension method?
OrderBy(x => x)
Background: I have to check whether two ISet<T> instances contain the same elements, and considered using the
bool setsEqual = MySet.SequenceEqual(OtherSet);
method. As the order of the elements contained in the sets is not defined and may differ, SequenceEqual would fail in those cases where the internal order is not the same. So I would have to explicitly define an order. As the ordering algorithm itself is completely irrelevant as long as it's stable, I just used an "identity" lambda expression:
bool setsEqual = MySet.OrderBy(x => x).SequenceEqual(OtherSet.OrderBy(x => x));
But what does "compare the objects themselves" mean in code? As this OrderBy extension method is generic, there must be a default comparison algorithm in place that can sort objects without knowing anything more about them, which would mean the comparison has to be delegated to the type of the set elements itself. Is there an interface that the elements' type would have to support, or is there a default comparer (perhaps one comparing internal memory addresses of objects) in place?
To answer the question about sorting: sorting uses IComparable<T>, or IComparable if that isn't implemented. The IComparable interfaces force you to implement an int CompareTo(object) method (or an int CompareTo(T) method if you use the typed version).
The order of your elements is determined by the sign of the int. For x.CompareTo(y), the value returned is interpreted as follows:
0: the two objects are equivalent for ordering purposes
negative: x precedes y (i.e. x comes before y)
positive: x follows y (i.e. x comes after y)
The actual magnitude is ignored; the sign is all that matters. If you implement IComparable yourself, you choose the semantics of the sort order.
Many types implement IComparable already, like all the numbers, strings, etc. You'll need to implement it explicitly if you want to sort objects of types you've created yourself. It's not a bad practice if you intend those objects to ever be displayed in a sorted list on screen.
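For example, a minimal sketch for a type sorted by a Name property (the class and property names are just illustrative):
using System;

public class Person : IComparable<Person>
{
    public string Name { get; set; }

    // Negative result: this instance sorts before 'other'; zero: same position; positive: after.
    public int CompareTo(Person other)
    {
        if (other == null) return 1;   // by convention, any instance sorts after null
        return string.Compare(Name, other.Name, StringComparison.Ordinal);
    }
}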
As to your specific case, where you just need to determine whether a set and another IEnumerable are equivalent, you would use the ISet<T>.SetEquals(IEnumerable<T>) method, which is implemented by the standard library set implementations. Sets, by definition, guarantee that the values are unique, so as long as the number of elements is the same, you only need to detect that all the elements of one IEnumerable can be found in the set.
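In the scenario from the question that reduces to a one-liner:
// SetEquals ignores element order, so no sorting is needed.
bool setsEqual = MySet.SetEquals(OtherSet);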
The method uses the IComparable<T> or the IComparable interface, depending on which of the two is implemented. If neither is implemented, an exception is thrown once the comparer is actually used.
However, you won't need to order your instances before comparing the sets. Simply loop over one set and check whether all of its elements are contained in the other set. Or use this:
var areEqual = firstSet.All(x => secondSet.Contains(x)) && secondSet.All(x => firstSet.Contains(x));
Or even simpler:
var areEqual = !firstSet.Except(secondSet).Any() && !secondSet.Except(firstSet).Any();
Both ways perform much faster than your approach, as the iteration stops at the first element that does not fit. Using OrderBy you'd process all elements, regardless of whether there was already a mismatch.
Unlike for equality, there's no 'default' comparer for objects in general.
It seems that Comparer<TKey>.Default always returns a comparer, for any type TKey. If no sensible comparison method can be determined, you get an exception, but only once the comparer is used.
If neither interface is implemented, you get the message "At least one object must implement IComparable."
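A small sketch illustrating that behavior (in current .NET the exception thrown is an ArgumentException with that message; NotComparable is a placeholder type):
using System.Collections.Generic;

class NotComparable { }   // implements neither IComparable nor IComparable<T>

class Demo
{
    static void Main()
    {
        // Getting the default comparer succeeds for any type...
        var comparer = Comparer<NotComparable>.Default;

        // ...but actually using it throws: "At least one object must implement IComparable."
        comparer.Compare(new NotComparable(), new NotComparable());
    }
}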
Why does IEnumerable<T>.Reverse() return a reversed collection and leave the original collection intact, while List<T>.Reverse() reverses the original collection itself? This is somewhat confusing to me, since List<T> implements IEnumerable<T>.
Because they're two different methods.
List<T>.Reverse is an instance method on List<T>. This modifies the List in place.
Enumerable.Reverse<T> is an extension method on IEnumerable<T>. This creates a new sequence that enumerates the source sequence in reverse order.
You can use the latter on a List<T> by calling it as a static method:
var list = new List<string>{"1","2"};
var reversed = Enumerable.Reverse(list);
Or by casting to IEnumerable<T>:
IEnumerable<string> list = new List<string>{"1","2"};
var reversed = list.Reverse();
In both of these cases, list remains unchanged, and reversed, when enumerated, returns {"2","1"}.
Conceptually, this may be because IEnumerable is used when you want to represent an immutable collection of items. That is, you only want to read the items in the list, but not add/insert/delete or otherwise change the collection. In this view of the data, returning a new IEnumerable in a different order is the expected result. When you use a List, you expect to be able to add/insert/delete or otherwise mutate the collection, so a Reverse method that changes the order of the original list would be expected in that context.
As others have noted, IEnumerable is an interface and List is a class; List implements IEnumerable.
So,
IEnumerable<String> names = new List<String>();
var reversedNames = names.Reverse();
gets you a second, reversed sequence while leaving the original untouched.
Whereas,
List<String> names = new List<String>();
names.Reverse();
reverses your original list in place. I hope this makes sense.
Reverse on IEnumerable<T> is part of LINQ and is added as an extension method (Enumerable.Reverse<T>). It was added after Reverse had already been implemented for List<T>.
IEnumerable<T> isn't a collection of objects in itself; it just tells the consumer how to get at those items.
These are different methods, remember: essentially different, with nothing in common but the name.
void List<T>.Reverse() is an instance method on List<T>, and it does an in-place reversal of the list or a part of it.
Enumerable.Reverse is an extension method (which, by the way, IList<T> also gets) that creates a new enumerable with the order reversed.
There is no such thing as IEnumerable<T>.Reverse(); it's Enumerable.Reverse<T>(this IEnumerable<T>), an extension method from LINQ that applies to every IEnumerable<T>.
Now that we've established where they come from, it's easy to understand why they are so different. LINQ adds methods for creating "processing streams", which is achieved by creating a new instance every time.
List<T>.Reverse() is a method of List<T> and, like all its mutating methods (e.g. Add), directly modifies the instance.
An IEnumerable<T> has an underlying implementation, which could be List<T>, HashSet<T>, or something else that implements IEnumerable<T>.
List<T> itself is the implementation, so you can directly modify that instance.
And as everyone else has mentioned, one is an extension method and part of Linq, and one is implemented directly on the type.
I have a list of objects and I want to see if a particular object is in this list. However, when I use the Contains() or IndexOf() methods on the list I get incorrect results, since they use the Equals() method of the object, which is not what I need. I want to find a particular instance, not an object that merely has equal property values.
If you wish to match the references, you can use:
if (object.ReferenceEquals(item1, item2))
...
to force it to compare references instead of using Equals()
Or:
int index = list.FindIndex(item=>ReferenceEquals(item, target));
(See the MSDN Documentation for List.FindIndex() for more details.)
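If you also want a Contains-style check, LINQ's Contains overload accepts an IEqualityComparer<T>. A minimal reference-equality comparer might look like this (the ReferenceComparer name is illustrative; on .NET 5+ the built-in ReferenceEqualityComparer.Instance can be used instead):
using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Compares objects by reference only, ignoring any Equals() override.
class ReferenceComparer<T> : IEqualityComparer<T> where T : class
{
    public bool Equals(T x, T y) => ReferenceEquals(x, y);
    public int GetHashCode(T obj) => RuntimeHelpers.GetHashCode(obj);
}

// Usage (item type name is illustrative):
// bool found = list.Contains(target, new ReferenceComparer<MyItem>());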
Can you use the hashcode?
list.Where(w => w.GetHashCode() == target.GetHashCode())
I need a collection that exposes the [] operator, contains only unique objects, and is generic. Can anyone help?
Dictionary(Of TKey, TValue) Class represents a collection of keys and values.
HashSet<T>
It depends what you mean by "exposes the [] operator."
If you want to be able to access objects in a unique collection by some arbitrary key, then use a Dictionary<TKey, TValue> (e.g. Dictionary<string, object>).
If you want to be able to create a list of unique objects which permits access by an ordinal index, in the order in which objects were added, you will need to roll something of your own. I am not aware of any framework class that offers both uniqueness like a HashSet<T> and access to objects in the order in which they were added, like a List<T>. SortedSet<T> almost does it, but does not have indexer access, so while it does maintain an order, it does not allow access using that order except through enumeration. You could use the LINQ extension method ElementAt to access the element at a particular ordinal index, but performance would be very bad since this method works by iteration.
You could also use a Dictionary<int, TValue>, but you would still have to maintain the index yourself, and if anything is ever removed you'd have a hole in your list. This would be a good solution if you never had to remove elements.
To have both uniqueness and access by index, and also be able to remove elements, you need a combination of a hash table and an ordered list. I created such a class recently. I don't think it is necessarily the most efficient implementation, since it does its work by keeping two copies of the data (one as a List<T> and one as a HashSet<T>).
In my situation, I valued speed over storage efficiency, since the amount of data wasn't large. This class offers the speed of a List<T> for indexed access and the speed of a HashSet<T> for element lookups (e.g. ensuring uniqueness when adding) at the expense of roughly twice the storage requirements.
An alternative would be to use just a List<T> as your basis, and verify uniqueness before any add/insert operation. This would be more memory efficient, but much slower for add/insert operations because it doesn't take advantage of a hash table.
Here's the class I used.
http://snipt.org/xlRl
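For reference, a minimal sketch of the combined List<T>/HashSet<T> approach described above (this is not the class from the link; the UniqueList name and its members are illustrative):
using System.Collections.Generic;

// Keeps insertion order and allows indexed access via the list,
// while enforcing uniqueness via the hash set.
public class UniqueList<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly HashSet<T> _set = new HashSet<T>();

    public T this[int index] => _items[index];

    public int Count => _items.Count;

    // Returns false instead of adding a duplicate.
    public bool Add(T item)
    {
        if (!_set.Add(item)) return false;
        _items.Add(item);
        return true;
    }

    public bool Remove(T item)
    {
        if (!_set.Remove(item)) return false;
        _items.Remove(item);   // O(n), acceptable when removals are rare
        return true;
    }

    public bool Contains(T item) => _set.Contains(item);
}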
The HashSet class should do the trick. See HashSet(Of T) for more information. If you need them to maintain a sorted order, the SortedSet should do the trick. See SortedSet(Of T) for more information about that class.
If you're looking to store unique objects (entities, for example) while exposing a [], then you want to use the KeyedCollection class.
MSDN KeyedCollection
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

// This class represents a very simple keyed list of OrderItems,
// inheriting most of its behavior from the KeyedCollection and
// Collection classes. The immediate base class is the constructed
// type KeyedCollection<int, OrderItem>. When you inherit
// from KeyedCollection, the second generic type argument is the
// type that you want to store in the collection -- in this case
// OrderItem. The first type argument is the type that you want
// to use as a key. Its values must be calculated from OrderItem;
// in this case it is the int field PartNumber, so SimpleOrder
// inherits KeyedCollection<int, OrderItem>.
//
public class SimpleOrder : KeyedCollection<int, OrderItem>
{
    // The parameterless constructor of the base class creates a
    // KeyedCollection with an internal dictionary. For this code
    // example, no other constructors are exposed.
    //
    public SimpleOrder() : base() {}

    // This is the only method that absolutely must be overridden,
    // because without it the KeyedCollection cannot extract the
    // keys from the items. The input parameter type is the
    // second generic type argument, in this case OrderItem, and
    // the return value type is the first generic type argument,
    // in this case int.
    //
    protected override int GetKeyForItem(OrderItem item)
    {
        // In this example, the key is the part number.
        return item.PartNumber;
    }
}
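A short usage sketch to go with the class above (OrderItem is not included in the excerpt, so the minimal version here with a PartNumber field is an assumption for illustration):
using System;

// Minimal stand-in for the OrderItem type referenced above (assumed shape).
public class OrderItem
{
    public readonly int PartNumber;
    public readonly string Description;

    public OrderItem(int partNumber, string description)
    {
        PartNumber = partNumber;
        Description = description;
    }
}

public static class Demo
{
    public static void Main()
    {
        var order = new SimpleOrder();
        order.Add(new OrderItem(110072674, "Widget"));

        // With an int key, the key-based indexer added by KeyedCollection
        // hides Collection<T>'s positional indexer, so this looks up by PartNumber.
        OrderItem widget = order[110072674];
        Console.WriteLine(widget.Description);
    }
}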