Is there a practical difference between .All() and .TrueForAll() when operating on a List&lt;T&gt;? I know that .All() is part of IEnumerable&lt;T&gt;, so why add .TrueForAll()?
From the docs for List<T>.TrueForAll:
Supported in: 4, 3.5, 3.0, 2.0
So it was added before Enumerable.All.
The same is true for a bunch of other List<T> methods which work in a similar way to their LINQ counterparts. Note that ConvertAll is somewhat different, in that it has the advantage of knowing that it's working on a List<T> and creating a List<TResult>, so it gets to preallocate whatever it needs.
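As a rough sketch of that difference (the exact allocation behavior is an implementation detail of the runtime, so treat the comments as a hedge rather than a guarantee), compare ConvertAll with its LINQ equivalent:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // List<T>.ConvertAll knows the source count up front, so the
        // resulting List<string> can be allocated at the right size.
        List<string> viaConvertAll = numbers.ConvertAll(n => n.ToString());

        // The LINQ equivalent goes through IEnumerable<T>; ToList() may
        // have to grow its backing array as it iterates, depending on
        // what optimizations the runtime applies.
        List<string> viaLinq = numbers.Select(n => n.ToString()).ToList();

        Console.WriteLine(string.Join(",", viaConvertAll)); // 1,2,3
        Console.WriteLine(string.Join(",", viaLinq));       // 1,2,3
    }
}
```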
TrueForAll existed in .NET 2.0, before LINQ was in .NET 3.5.
See: http://msdn.microsoft.com/en-us/library/kdxe4x4w(v=VS.80).aspx
TrueForAll appears to be specific to List, while All is part of LINQ.
My guess is that the former dates back to the .NET 2 days, while the latter is new in .NET 3.5.
Sorry for digging this out, but I came across this question and have seen that the actual question about differences is not properly answered.
The differences are:
The Enumerable.All() extension method does an additional check on the sequence it is called on: if it is null, it throws an ArgumentNullException.
Enumerable.All() is not guaranteed to check the elements in order. In theory, the specification would allow the items to be checked in a different order on every call. List&lt;T&gt;.TrueForAll() is documented to always test the elements in list order.
The second point follows from the implementations: Enumerable.All() has to iterate with foreach (i.e. MoveNext()), while List&lt;T&gt;.TrueForAll() internally uses a for loop over the list indices.
In practice, though, you can be pretty sure that the foreach / MoveNext() approach will also yield the elements in list order, because a lot of programs expect that and would break if it were ever changed.
From a performance point of view, List&lt;T&gt;.TrueForAll() should be slightly faster: it performs one check less, and a for loop over a list is cheaper than foreach. Usually, however, the JIT does a good job of optimizing here, so in the end there will probably be (almost) no measurable difference.
Conclusion: List&lt;T&gt;.TrueForAll() is the better choice in theory, but in practice it makes no difference.
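A small sketch of the two calls side by side, including the one difference you can observe directly (the behavior on a null reference):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 2, 4, 6 };

        // Functionally equivalent for a List<T>:
        bool allEven1 = numbers.All(n => n % 2 == 0);        // LINQ, .NET 3.5+
        bool allEven2 = numbers.TrueForAll(n => n % 2 == 0); // List<T>, .NET 2.0+
        Console.WriteLine(allEven1); // True
        Console.WriteLine(allEven2); // True

        // Calling the extension method on a null reference compiles and
        // throws ArgumentNullException inside Enumerable.All; calling the
        // instance method on a null reference throws NullReferenceException.
        List<int> nullList = null;
        try { nullList.All(n => true); }
        catch (ArgumentNullException) { Console.WriteLine("All: ArgumentNullException"); }
        try { nullList.TrueForAll(n => true); }
        catch (NullReferenceException) { Console.WriteLine("TrueForAll: NullReferenceException"); }
    }
}
```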
Basically, because this method existed before LINQ did. TrueForAll on List&lt;T&gt; originated in .NET Framework 2.0.
TrueForAll is not an extension method; it has been part of List&lt;T&gt; in the framework since version 2.0.
I was reading about C#'s ImmutableSortedDictionary in System.Collections.Immutable and thinking about how to apply it in my program. I quite like C++'s lower_bound and upper_bound (see here), and I was rather expecting to see something of the sort for range lookups. However, similar methods seem to be strangely absent from the documentation. Am I missing something? Or does MS truly provide a sorted dictionary without efficient access to the sorted ranges? That doesn't exactly seem like something one could do on an IEnumerable of the keys as, say, an extension method, so I'm a bit puzzled that I'm not seeing something provided directly by the collection.
It is irritating that the available built-in collections do not offer a full set of features (for example, SortedDictionary lacks a BinarySearch method), forcing us to search for third-party solutions (like the C5 library).
In your case, instead of an ImmutableSortedDictionary you could probably use an ImmutableSortedSet, embedding the values in the keys and using an appropriate comparer. At least the API of this class contains the properties Min and Max.
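As a sketch of how far the built-in API gets you toward lower_bound, ImmutableSortedSet&lt;T&gt; does expose an integer indexer and an IndexOf that returns the bitwise complement of the insertion point when the value is absent, so a lower_bound-style helper can be written on top of it (the helper name here is my own, not part of the framework):

```csharp
using System;
using System.Collections.Immutable;

class Program
{
    // A lower_bound-style lookup: index of the first element >= value.
    // Relies on ImmutableSortedSet<T>.IndexOf returning the bitwise
    // complement of the insertion point when the value is not present.
    static int LowerBound<T>(ImmutableSortedSet<T> set, T value)
    {
        int index = set.IndexOf(value);
        return index >= 0 ? index : ~index;
    }

    static void Main()
    {
        var set = ImmutableSortedSet.Create(10, 20, 30, 40);
        int from = LowerBound(set, 15); // first element >= 15
        int to = LowerBound(set, 35);   // first element >= 35
        for (int i = from; i < to; i++)
            Console.WriteLine(set[i]);  // 20, then 30
    }
}
```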
While browsing the .NET Core source, I noticed that even in source form, the iterator classes are implemented manually instead of relying on the yield statement and the compiler-generated IEnumerable implementation.
You can see, for example, the declaration and implementation of the Where iterator at this line: https://github.com/dotnet/corefx/blob/master/src/System.Linq/src/System/Linq/Enumerable.cs#L168
I'm assuming that if they went to the trouble of doing this instead of using a simple yield statement, there has to be some benefit, but I can't immediately see what it is. It seems very similar to what I remember the compiler generating automatically, from reading Eric Lippert's blog a few years back; and when I naively reimplemented LINQ with yield statements in its early days to understand it better, the performance profile was similar to that of the .NET version.
It piqued my curiosity, but it's also a genuinely important question, as I'm in the middle of a fairly big in-memory data project, and if I'm missing something obvious that makes this approach better, I would love to know the tradeoffs.
Edit: to clarify, I do understand why they can't just yield in the Where method (different enumeration for different container types). What I don't understand is why they implement the iterator itself manually: that is, instead of forking to different iterators, they could fork to different methods, iterate differently based on type, and yield to get the auto-implemented state machine rather than the manual "case 1 goto 2, case 2 ..." code.
One possible reason is that the specialized iterators perform a few optimizations, like combining selectors and predicates and taking advantage of indexed collections.
The usefulness of these is that some sources can be iterated in a more optimal way than what the compiler magic for yield would generate. And by creating these custom iterators, they can pass this extra information about the source to subsequent LINQ operations in a single chain (instead of making that information available only to the first operation in the chain). Thus, all Where and Select operations (that don't have anything else between them) can be executed as one.
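A simplified sketch of that idea (this is not the actual corefx code, which also hand-writes the state machines; the class and method names here are illustrative): a named iterator class gives Select a place to intercept a preceding Where and merge both steps into a single pass, which a compiler-generated yield iterator cannot offer.

```csharp
using System;
using System.Collections.Generic;

// A hand-written Where "iterator" exposed as a named class. Because it
// has a name, a following Select can be defined directly on it and
// fuse filtering and projection into one loop over the source.
class WhereEnumerable<T>
{
    private readonly IEnumerable<T> _source;
    private readonly Func<T, bool> _predicate;

    public WhereEnumerable(IEnumerable<T> source, Func<T, bool> predicate)
    {
        _source = source;
        _predicate = predicate;
    }

    // With an anonymous compiler-generated iterator there would be no
    // place to put this hook.
    public IEnumerable<TResult> Select<TResult>(Func<T, TResult> selector)
    {
        foreach (T item in _source)
            if (_predicate(item))
                yield return selector(item); // filter and project in one pass
    }
}

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };
        var evensDoubled = new WhereEnumerable<int>(numbers, n => n % 2 == 0)
            .Select(n => n * 2);
        foreach (int n in evensDoubled)
            Console.WriteLine(n); // 4, then 8
    }
}
```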
I still use Wintellect's PowerCollections library, even though it is aging and unmaintained, because it did a good job of covering holes left in the standard MS collections libraries. But LINQ and C# 4.0 are poised to replace PowerCollections...
I was very happy to discover System.Linq.Lookup because it should replace Wintellect.PowerCollections.MultiDictionary in my toolkit. But Lookup seems to be immutable! Is that true? Can you only create a populated Lookup by calling ToLookup?
Yes, you can only create a Lookup by calling ToLookup. The immutable nature of it means that it's easy to share across threads etc, of course.
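A quick illustration of that usage, including one nicety of the type: indexing on a missing key returns an empty sequence rather than throwing.

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var words = new[] { "apple", "avocado", "banana", "cherry" };

        // The only way to obtain a Lookup<TKey, TElement> is ToLookup:
        var byFirstLetter = words.ToLookup(w => w[0]);

        // Unlike Dictionary<TKey, TValue>, a missing key is not an error:
        Console.WriteLine(byFirstLetter['a'].Count()); // 2
        Console.WriteLine(byFirstLetter['z'].Count()); // 0
    }
}
```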
If you want a mutable version, you could always use the Edulinq implementation as a starting point. It's internally mutable, but externally immutable - and I wouldn't be surprised if the Microsoft implementation worked in a similar way.
Personally I'm rarely in a situation where I want to mutate the lookup - I would prefer to perform appropriate transformations on the input first. I would encourage you to think in this way too - I find myself wishing for better immutability support from other collections (e.g. Dictionary) more often than I wish that Lookup were mutable :)
That is correct. Lookup is immutable; you can create an instance by using the LINQ ToLookup() extension method. Technically, even that fact is an implementation detail, since the method returns an ILookup interface, which in the future might be implemented by some other concrete class.
Following the suggestions of FxCop and my personal inclination, I've been encouraging the team I'm coaching to use ReadOnlyCollections as much as possible, if only so that recipients of the lists can't modify their content. In theory this is bread & butter. The problem is that the List&lt;&gt; interface is much richer, exposing all sorts of useful methods. Why did they make that choice?
Do you just give up and return writable collections? Do you return readonly collections and then wrap them in the writable variety? Ahhhhh.
Update:
Thanks, I'm familiar with the Framework Design Guidelines, and that's why the team is using FxCop to enforce them. However, this team is living with VS 2005 (I know, I know), so telling them that LINQ / extension methods would solve their problems just makes them sad.
They've learned that List.FindAll() and .Find() provide greater clarity than writing a foreach loop. Now that I'm pushing them to use ReadOnlyCollections, they lose that clarity.
Maybe there is a deeper design problem that I'm not spotting.
-- Sorry, the original post should have mentioned the VS2005 restriction. I've lived with it for so long that I just don't notice it.
Section 8.3.2 of the .NET Framework Design Guidelines Second Edition:
DO use ReadOnlyCollection<T>, a subclass of ReadOnlyCollection<T>, or in rare cases IEnumerable<T> for properties or return values representing read-only collections.
We go with ReadOnlyCollections to express our intent of the collection returned.
The List&lt;T&gt; methods you speak of were added in .NET 2.0 for convenience. In C# 3.0 / .NET 3.5, you can get all those methods back on ReadOnlyCollection&lt;T&gt; (or any IEnumerable&lt;T&gt;) using extension methods, and use the LINQ operators as well, so I don't think there's any motivation for adding them natively to other types. The fact that they exist at all on List&lt;T&gt; is a historical artifact: extension methods are available now, but weren't in 2.0.
First off, ReadOnlyCollection<T> does implement IEnumerable<T> and IList<T>. With all of the extension methods in .NET 3.5 and LINQ, you have access to nearly all of the functionality from the original List<T> class in terms of querying, which is all you should do with a ReadOnlyCollection<T> anyways.
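A short illustration of that point: wrap a list with AsReadOnly(), query it with LINQ as usual, and mutation attempts fail at run time.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

class Program
{
    static void Main()
    {
        var inner = new List<int> { 1, 2, 3, 4 };
        ReadOnlyCollection<int> readOnly = inner.AsReadOnly();

        // ReadOnlyCollection<T> implements IList<T>/IEnumerable<T>,
        // so every LINQ operator is available on it:
        var evens = readOnly.Where(n => n % 2 == 0).ToList();
        Console.WriteLine(string.Join(",", evens)); // 2,4

        // The mutating IList<T> members are implemented explicitly
        // and throw NotSupportedException:
        try { ((IList<int>)readOnly).Add(5); }
        catch (NotSupportedException) { Console.WriteLine("read-only"); }
    }
}
```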
That being said, your initial question leads me to make some suggestions...
Returning List<T> is bad design, so it shouldn't be a point of comparison. List<T> should be used for implementation, but for the interface, IList<T> should be returned. The Framework Design Guidelines specifically state:
"DO NOT use ArrayList or List<T> in public APIs." (Page 251)
If you take that into consideration, there is absolutely no disadvantage to ReadOnlyCollection<T> when compared to List<T>. Both of these classes implement IEnumerable<T> and IList<T>, which are the interfaces that should be returned anyways.
I don't have any insight as to why they weren't originally added. But now that we have LINQ I certainly see no reason to add them in future versions of the language. The methods you mentioned can easily be written in a LINQ query today. These days I just use the LINQ queries for pretty much everything. I actually more often get annoyed with List<T> having those methods because it conflicts with extension methods I write against IEnumerable<T>.
I think Jeff's answer kinda contains the answer you need; instead of ReadOnlyCollection<T>, return a subclass of it... one that you implement yourself to include the methods that you'd like to use without upgrading to VS2008/LINQ.
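A hypothetical subclass along those lines (the class name and method bodies here are my own sketch, kept .NET 2.0-compatible with no LINQ): it re-exposes the List&lt;T&gt; convenience methods the team already knows, while the collection itself stays read-only.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

// Re-adds Find/FindAll on top of ReadOnlyCollection<T> so callers keep
// the clarity of the List<T> methods without gaining mutation access.
public class SearchableReadOnlyCollection<T> : ReadOnlyCollection<T>
{
    public SearchableReadOnlyCollection(IList<T> list) : base(list) { }

    // Mirrors List<T>.Find: first match, or default(T) if none.
    public T Find(Predicate<T> match)
    {
        foreach (T item in this)
            if (match(item))
                return item;
        return default(T);
    }

    // Mirrors List<T>.FindAll: all matches, in order.
    public List<T> FindAll(Predicate<T> match)
    {
        List<T> result = new List<T>();
        foreach (T item in this)
            if (match(item))
                result.Add(item);
        return result;
    }
}
```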
I have an object in C# with, let's say, 20 properties, and it's part of a data contract. I also have another business entity with similar properties, which I want to populate from the response object. Is there any way to do this other than assigning each property of one object to the corresponding property of the other?
Yes, take a look at AutoMapper.
MiscUtil has an answer to this (PropertyCopy) that uses Expression (.NET 3.5) and a static field to cache the compiled delegate (so there is negligible cost per invoke):
DestType clone = PropertyCopy<DestType>.CopyFrom(original);
If you are using .NET 2.0, then reflection would probably be your friend. You can use HyperDescriptor to improve the performance if you need to.
Reflection is an option if you want to do it in an automated manner, provided the property names are easily mappable between the objects.
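A minimal reflection-based sketch of that approach (the class name is illustrative; it assumes properties match by name and assignable type, and it will be slower than a compiled-Expression or AutoMapper solution):

```csharp
using System;
using System.Reflection;

static class PropertyCopier
{
    // Copies every readable source property to a same-named, writable,
    // type-compatible target property. Properties with no counterpart
    // are silently skipped.
    public static void Copy(object source, object target)
    {
        foreach (PropertyInfo sourceProp in source.GetType().GetProperties())
        {
            if (!sourceProp.CanRead)
                continue;
            PropertyInfo targetProp = target.GetType().GetProperty(sourceProp.Name);
            if (targetProp != null
                && targetProp.CanWrite
                && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
    }
}
```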
AutoMapper is worth a try, but in the end I decided it wasn't for me. The big problem with those sorts of tools is that you incur a good deal of runtime overhead each and every time a mapping occurs. I asked this same question last week and ended up rolling my own solution (look at the accepted answer). You're free to modify the source I provided; I make no claims as to its effectiveness, suitability, performance, you-break-it-you-get-to-keep-the-pieces, etc., but it works well enough for me for design-time object-to-object mapping.
C# Object Clone Wars might be a good starting point.