Should I cast in my lambda or cast the IEnumerable? - c#

In my project I have a MyClass which implements IMyClass. I need to return a list of IMyClass by transforming a list of other items. For simplicity's sake, assume that I can create a MyClass just by passing another item into its constructor, i.e. new MyClass(item).
Consider the following two lines, which (as far as I know) produce the same result:
var option1 = items.Select(item => new MyClass(item)).Cast<IMyClass>().ToList();
var option2 = items.Select(item => new MyClass(item) as IMyClass).ToList();
It seems to me that option #1 would require a double enumeration, once to cast all the items to my interface and once to generate the list. If I'm right then option #2 would be smarter. However, I've never seen any code using something like option #2, and I tend to assume that I'm not smart enough to come up with something clever that the rest of the C# community did not.
On a side note, I think option #2 is more aesthetically pleasing, but that's just me.
My question is: is my option #2 the better idea I think it is? Are there any gotchas I'm missing, or other reasons why I'd want to stick with option #1? Or am I perhaps comparing two stupid ideas when there is a smarter third one that I'm missing completely?

I'd go for option 3:
var option3 = items.Select<Foo, IMyClass>(item => new MyClass(item))
                   .ToList();
Alternatively, don't use as but just cast normally:
var option4 = items.Select(item => (IMyClass) new MyClass(item))
                   .ToList();
Both of these seem cleaner than using Cast.
Oh, and as of C# 4 with .NET 4 (due to covariance), you could put a type argument on the ToList call instead:
var option5 = items.Select(item => new MyClass(item))
                   .ToList<IMyClass>();
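
To make the comparison concrete, here is a minimal self-contained sketch (Foo, MyClass and IMyClass are stand-in types, not the questioner's real ones) showing that all three shapes produce a List<IMyClass>:

using System;
using System.Collections.Generic;
using System.Linq;

public interface IMyClass { }

public class Foo { }

public class MyClass : IMyClass
{
    public MyClass(Foo item) { }
}

public static class Demo
{
    public static void Main()
    {
        var items = new List<Foo> { new Foo(), new Foo() };

        // Explicit type arguments on Select: the lambda's result is typed as IMyClass.
        List<IMyClass> option3 = items.Select<Foo, IMyClass>(item => new MyClass(item)).ToList();

        // Cast inside the lambda.
        List<IMyClass> option4 = items.Select(item => (IMyClass)new MyClass(item)).ToList();

        // C# 4 covariance: an IEnumerable<MyClass> is also an IEnumerable<IMyClass>,
        // so the type argument can go on ToList instead.
        List<IMyClass> option5 = items.Select(item => new MyClass(item)).ToList<IMyClass>();

        Console.WriteLine("{0} {1} {2}", option3.Count, option4.Count, option5.Count);
    }
}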

It seems to me that option #1 would require a double enumeration
This is not true. In both cases, the items collection is only enumerated when you get to ToList().
The line
var option1 = items.Select(item => new MyClass(item)).Cast<IMyClass>().ToList();
is equivalent to
var option1 = items.Select(item => new MyClass(item)).Select(x => (IMyClass)x).ToList();
The only difference between the two is that the first one requires two function calls per item (unless C# inlines the lambdas somehow, which I don't believe is the case) while the second option requires only one.
Personally, I'd go with the second one as a matter of style.
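
To convince yourself that there is only a single pass, here is a quick sketch that counts how often the source elements are actually produced (the element types are arbitrary stand-ins for MyClass/IMyClass):

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int enumerations = 0;

        // A source sequence that counts how many times an element is produced.
        var items = Enumerable.Range(0, 5).Select(i => { enumerations++; return i; });

        // Two chained operators plus a terminal ToList: still a single pass over the source.
        var list = items.Select(i => (object)i).Cast<IComparable>().ToList();

        Console.WriteLine(enumerations); // prints 5 - each element flowed through the whole chain once
    }
}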

Which one you use is a matter of preference, something we really cannot answer for you.
But your intuition is sort of correct that Cast adds a second layer of iteration to your loop. It's very minor, and I doubt it will produce any measurable difference in performance, but the Cast method returns a new IEnumerable object that basically does this:
foreach (object obj in source) yield return (TResult)obj;
The effect is mostly another level on the call stack; since it uses yield it will only iterate on demand, like most other IEnumerable methods. But it will have to return through two levels of iterator state instead of one. Whether that matters for you is something you'll need to measure for your own applications.
(Also note that, at least according to the reference source, it does an unsafe cast, which might throw an exception if the cast is invalid. That's another reason to prefer your option #2.)
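
To see the difference in failure behaviour, here is a small contrived sketch (not from the question) showing the hard cast inside Cast<T> throwing, versus OfType<T>, which is essentially the "as plus null filter" approach:

using System;
using System.Collections;
using System.Linq;

class Program
{
    static void Main()
    {
        ArrayList mixed = new ArrayList { "one", 2, "three" };

        // Cast<T> performs (T)obj on each element, so the 2 throws an
        // InvalidCastException as soon as that element is enumerated.
        try
        {
            var strings = mixed.Cast<string>().ToList();
        }
        catch (InvalidCastException ex)
        {
            Console.WriteLine(ex.Message);
        }

        // OfType<T> (effectively an 'as' cast plus a null filter) silently skips
        // the mismatched element instead of throwing.
        var onlyStrings = mixed.OfType<string>().ToList(); // "one", "three"
        Console.WriteLine(string.Join(", ", onlyStrings));
    }
}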

You can always provide explicit type arguments to your Select
var option2 = items.Select<IItem,IMyClass>(item => new MyClass(item)).ToList();
where IItem is a type or interface to which items could be cast.

Related

Syntax alternatives to casting of dynamic objects

I have an implementation of DynamicDictionary where all of the entries in the dictionary are of a known type:
public class FooClass
{
    public void SomeMethod()
    {
    }
}
dynamic dictionary = new DynamicDictionary<FooClass>();
dictionary.foo = new FooClass();
dictionary.foo2 = new FooClass();
dictionary.foo3 = DateTime.Now; // <-- throws an exception, since DateTime is not FooClass
What I'd like is to be able to have Visual Studio Intellisense work when referencing a method of one of the dictionary entries:
dictionary.foo.SomeMethod(); // <-- would like SomeMethod to pop up in Intellisense
The only way I've found to do this is:
((FooClass)dictionary.foo).SomeMethod()
Can anyone recommend a more elegant syntax? I'm comfortable writing a custom implementation of DynamicDictionary with IDynamicMetaObjectProvider.
UPDATE:
Some have asked why dynamics and what my specific problem is. I have a system that lets me do something like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).Validate((parameters) =>
{
//do some method validation on the parameters
return true; //return true for now
}).WithMessage("The parameters are not valid");
In this case the method SomeMethodWithParameters has the signature
public void SomeMethodWithParameters(int index, object target)
{
}
What I have right now for registering validation for individual parameters looks like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).GetParameter("index").Validate((val) =>
{
return true; //valid
}).WithMessage("index is not valid");
What I'd like it to be is:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).index.Validate((val) =>
{
    return true;
}).WithMessage("index is not valid");
This works using dynamics, but you lose Intellisense after the reference to index, which is fine for now. The question is: is there a clever syntactical way (other than the ones mentioned above) to get Visual Studio to recognize the type somehow? So far it sounds like the answer is "no".
It seems to me that if there was a generic version of IDynamicMetaObjectProvider,
IDynamicMetaObjectProvider<T>
this could be made to work. But there isn't, hence the question.
In order to get Intellisense, you're going to have to cast the value to something that is not dynamic at some point. If you find yourself doing this a lot, you can use helper methods to ease the pain somewhat:
GetFoo(dictionary.Foo).SomeMethod();
But that isn't much of an improvement over what you've got already. The only other way to get intellisense would be to cast the value back to a non-dynamic type or avoid dynamic in the first place.
If you want to use Intellisense, it's usually best to avoid using dynamic in the first place.
typedDictionary["foo"].SomeMethod();
Your example makes it seem likely that you have specific expectations about the structure of your dynamic object. Consider whether there's a way to create a static class structure that would fulfill your needs.
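For example, if every value really is a FooClass, a plain Dictionary<string, FooClass> (or a thin wrapper around one) already gives you Intellisense; a minimal sketch:

using System;
using System.Collections.Generic;

public class FooClass
{
    public void SomeMethod() { }
}

class Program
{
    static void Main()
    {
        // A statically typed dictionary: the compiler knows every value is a FooClass,
        // so SomeMethod shows up in Intellisense and typos are caught at compile time.
        var typedDictionary = new Dictionary<string, FooClass>
        {
            { "foo",  new FooClass() },
            { "foo2", new FooClass() }
        };

        typedDictionary["foo"].SomeMethod();

        // typedDictionary["foo3"] = DateTime.Now; // would not even compile
    }
}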
Update
In response to your update: If you don't want to drastically change your syntax, I'd suggest using an indexer so that your syntax can look like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters)["index"].Validate((val) => {...});
Here's my reasoning:
You only add four characters (and subtract one) compared to the dynamic approach.
Let's face it: you are using a "magic string." By requiring an actual string, this fact will be immediately obvious to programmers who look at this code. Using the dynamic approach, there's nothing to indicate that "index" is not a known value from the compiler's perspective.
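In case it helps, here is a rough sketch of what such an indexer could look like; ActionMapping and ParameterValidation are hypothetical names standing in for whatever fluent types the framework actually uses:

using System;
using System.Collections.Generic;

// Hypothetical types: ActionMapping and ParameterValidation stand in for the
// real fluent types in the questioner's framework.
public class ActionMapping
{
    private readonly Dictionary<string, ParameterValidation> parameters =
        new Dictionary<string, ParameterValidation>();

    // The indexer makes the parameter name an explicit string at the call site:
    // UI.Map<Foo>().Action<int, object>(...)["index"].Validate(...)
    public ParameterValidation this[string parameterName]
    {
        get { return parameters[parameterName]; }
    }
}

public class ParameterValidation
{
    // Stub bodies - just enough to show the shape of the fluent API.
    public ParameterValidation Validate(Func<object, bool> rule) { return this; }
    public ParameterValidation WithMessage(string message) { return this; }
}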
If you're willing to change things around quite a bit, you may want to investigate the way Moq plays with expressions in their syntax, particularly the It.IsAny<T>() method. It seems like you might be able to do something more along these lines:
UI.Map<Foo>().Action(
    (x, v) => x.SomeMethodWithParameters(
        v.Validate<int>(index => { return index > 1; })
         .WithMessage("index is not valid"),
        v.AlwaysValid<object>()));
Unlike your current solution:
This wouldn't break if you ended up changing the names of the parameters in the method signature: Just like the compiler, the framework would pay more attention to the location and types of the parameters than to their names.
Any changes to the method signature would cause an immediate flag from the compiler, rather than a runtime exception when the code runs.
Another syntax that's probably slightly easier to accomplish (since it wouldn't require parsing expression trees) might be:
UI.Map<Foo>().Action((x, v) => x.SomeMethodWithParameters)
  .Validate(v => new {
      index = v.ByMethod<int>(i => { return i > 1; }),
      target = v.IsNotNull() });
This doesn't give you the advantages listed above, but it still gives you type safety (and therefore intellisense). Pick your poison.
Aside from an explicit cast,
((FooClass)dictionary.foo).SomeMethod();
or a safe cast,
(dictionary.foo as FooClass).SomeMethod();
the only other way to switch back to static invocation (which will allow Intellisense to work) is an implicit cast:
FooClass foo = dictionary.foo;
foo.SomeMethod();
Declared casting is your only option; you can't use helper methods, because they will be dynamically invoked as well, giving you the same problem.
Update:
Not sure if this is more elegant, but it doesn't involve a bunch of casting and it gets Intellisense outside of the lambda:
public class DynamicDictionary<T> : IDynamicMetaObjectProvider
{
    ...

    public T Get(Func<dynamic, dynamic> arg)
    {
        return arg(this);
    }

    public void Set(Action<dynamic> arg)
    {
        arg(this);
    }
}
...
var dictionary = new DynamicDictionary<FooClass>();
dictionary.Set(d => d.Foo = new FooClass());
dictionary.Get(d => d.Foo).SomeMethod();
As has already been said (in the question and in StriplingWarrior's answer), the C# 4 dynamic type does not provide Intellisense support. This answer merely tries to explain why (based on my understanding).
To the C# compiler, dynamic is little more than object: at compile time the compiler has essentially no knowledge of which members an instance supports. The difference is that, at run time, member accesses on a dynamic instance are resolved against the actual type of the object it currently holds (a form of late binding).
Consider the following:
dynamic v = 0;
v += 1;
Console.WriteLine("First: {0}", v);
// ---
v = "Hello";
v += " World";
Console.WriteLine("Second: {0}", v);
In this snippet, v represents an instance of Int32 in the first section of code and an instance of String in the second. The meaning of the += operator actually differs between the two calls because the types involved are resolved at run-time (the compiler doesn't pin down how the types are used at compile-time).
Now consider a slight variation:
dynamic v;

if (DateTime.Now.Second % 2 == 0)
    v = 0;
else
    v = "Hello";

v += 1;
Console.WriteLine("{0}", v);
In this example, v could potentially be either an Int32 or a String depending on the time at which the code is run. An extreme example, I know, though it clearly illustrates the problem.
Considering that a single dynamic variable could potentially represent any number of types at run-time, it would be nearly impossible for the compiler or IDE to make assumptions about the types it represents prior to its execution, so design-time or compile-time resolution of a dynamic variable's potential members is unreasonable (if not impossible).

C# - Dynamic Type or Typecast?

In terms of performance, which is better in C#: dynamic types or a typecast?
Like this (just an example, not the real implementation):
var list = new List<object>();
list.Add(new Woman());
list.Add(new Men());
list.Add(new Car());
// ... in another part of the code ...
var men = (Men)list[1];
men.SomeMenMethod();
Or this
var list = new List<dynamic>();
list.Add(new Woman());
list.Add(new Men());
list.Add(new Car());
// ... in another part of the code ...
var men = list[1];
men.SomeMenMethod();
The example is contrived: you know list[1] is a Men, so in that case the two are equivalent.
dynamic becomes useful when you don't know the precise type, but you do know that at runtime it will have the method or property you're calling.
Of course, if the type assumption is wrong, the first throws an exception on the var men = (Men)list[1]; line, while the latter throws the exception on men.SomeMenMethod();.
If possible, don't use either. Try to use a type-safe solution that doesn't involve casting or dynamic.
If that's not possible, casting is better: it's clearer, more type-safe (the compiler can check that Men actually has SomeMenMethod), the exception in case of an error is more informative, and it won't work by accident (with dynamic, if you think you have a Men but actually have a Woman that happens to implement the same method, the call succeeds, which is probably a bug).
You asked about performance. Nobody other than you can really know the performance of your specific case. If you really care about performance, always measure both ways.
But my expectation is that dynamic is going to be much slower, because it has to use something like a mini-compiler at runtime. It tries to cache the results after first run, but it still most likely won't be faster than a cast.
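If you do want numbers for your own case, a rough Stopwatch harness along these lines is enough to compare the two approaches (results vary by machine and runtime; Men and SomeMenMethod are the placeholders from the question):

using System;
using System.Collections.Generic;
using System.Diagnostics;

public class Men
{
    public void SomeMenMethod() { }
}

class Program
{
    static void Main()
    {
        const int iterations = 10000000;
        var objects  = new List<object>  { new Men() };
        var dynamics = new List<dynamic> { new Men() };

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var men = (Men)objects[0];   // cast: a cheap runtime type check
            men.SomeMenMethod();
        }
        Console.WriteLine("cast:    {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            var men = dynamics[0];       // dynamic: the call is bound (and cached) at runtime
            men.SomeMenMethod();
        }
        Console.WriteLine("dynamic: {0} ms", sw.ElapsedMilliseconds);
    }
}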

Why do LINQ operations lose the static type of the collection?

Regardless of the collection type I use as input, LINQ always returns IEnumerable<MyType> instead of List<MyType> or HashSet<MyType>.
From the MSDN tutorial:
int[] numbers = new int[7] { 0, 1, 2, 3, 4, 5, 6 };
// numQuery is an IEnumerable<int> <====== Why IEnumerable<int> instead of int[]?
var numQuery =
from num in numbers
where (num % 2) == 0
select num;
I'm wondering about the rationale behind the decision not to preserve the collection type (the way the element type is preserved), not about the suggested workaround.
I know that ToArray, ToDictionary and ToList exist; that's not the question.
Edit: How does the implementation in C# differ from Scala where it works without overriding the return types everywhere?
The LINQ standard query operators are defined on IEnumerable<T>, not on any other specific collection type. They are applicable to most collections because those collections all implement IEnumerable<T>.
The output of these standard query operators is a new IEnumerable<T>; if you wanted to return each caller's concrete collection type, you would need N implementations of every standard query operator, not just one.
Also, many aspects of LINQ, like lazy evaluation, would not work well if the operators were forced to produce, e.g., a List<T> as opposed to an IEnumerable<T>.
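You can see this from the shape of the operators themselves. Here is a simplified sketch (not the real BCL source) of how a Select-style operator is declared:

using System;
using System.Collections.Generic;

public static class MyEnumerable
{
    // One implementation covers every collection that implements IEnumerable<T>.
    // All the method knows about 'source' is that it can be enumerated, so all it
    // can promise to return is another IEnumerable<T> - lazily, one element at a time.
    public static IEnumerable<TResult> Select<TSource, TResult>(
        this IEnumerable<TSource> source,
        Func<TSource, TResult> selector)
    {
        foreach (TSource item in source)
            yield return selector(item);
    }
}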
Short answer: C#'s type system is too primitive to support preservation of collection type in higher order functions (without code duplication, that is).
Long answer: Refer to my post here. (Actually it's about Java, but applies to C# as well.)
In the specific case of LINQ-to-Objects, the LINQ provider is a set of extension methods defined in the System.Linq.Enumerable class. To be as general-purpose as possible, they are defined to extend any type which implements the IEnumerable<T> interface. Therefore, when these methods execute, they aren't aware of which concrete type the collection is; they only know that it is an IEnumerable<T>. Because of this, the operations you perform on a collection cannot be guaranteed to produce a collection of the same type. So instead, they produce the next best thing: an IEnumerable<T>.
This was a very specific design choice. LINQ is intended to work at a very high level of abstraction. It exists so that you don't have to care about underlying sources of data, but can instead just say what you want and not care about such details.
LINQ takes an IEnumerable<T> and returns one. Very natural. The reason LINQ takes IEnumerable<T> is that pretty much all collections implement it.
Many System.Linq.Enumerable methods are lazily evaluated. This allows you to create a query without executing it.
//a big array - have to start somewhere
int[] source = Enumerable.Range(0, 1000000).ToArray();
var query1 = source.Where(i => i % 2 == 1); //500000 elements
var query2 = query1.Select(i => i.ToString()); //500000 elements
var query3 = query2.Where(i => i.StartsWith("2")); //50000? elements
var query4 = query3.Take(5); //5 elements
string[] result = query4.ToArray();
This code calls ToString about 15 times. It allocates a single string[5] array.
If Where and Select returned arrays, this code would call ToString 500000 times and have to allocate two massive arrays. Thank goodness that is not the case.

Using iList to process items failing

It would be great to have shorthand for processing every item in a list, in this case saving.
So instead of
var people = MakePeople();
foreach (var person in people)
{
session.Save(person);
}
we could use
var cards = MakeCards(deck);
cards.Select(session.Save);
But, that doesn't work.. Suggestions? Aggregate?
Select uses lazy evaluation, so it won't work.
You're looking for List<T>.ForEach, or its non-existent LINQ version.
It looks like Save() has no return value. To use the method with Select(), it needs to have a return value.
In general, it's a bad idea to use LINQ with methods that have side effects (like Save()). When there are side-effects, the first method is certainly preferred. If MakePeople returns a concrete List, you could also try:
var people = MakePeople();
people.ForEach((p) => { session.Save(p); });
List has a ForEach(Action<T> action) method.
You tried to create a Card -> Void mapping. Select is meant to convert one item to another type so you can convert a card collection to e.g. a cardnumber collection.
As the others have already mentioned, List<T> has a ForEach method. But the general theme of the C# language designers (see Eric Lippert's blog) was not to provide such an extension method, since foreach is clearer. It does not save you an awful amount of typing, so the benefit here is minimal.
For reference, you can create an extension method that does what you want with three lines of code:
public static void ForEach<T>(this IEnumerable<T> sequence, Action<T> action)
{
    // argument null checking omitted
    foreach (T item in sequence)
        action(item);
}
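With that extension method in scope, the one-liner from the question works as intended (Save just has to accept a single item):

var cards = MakeCards(deck);
cards.ForEach(session.Save); // equivalent to the explicit foreach loop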
Nitpick corner: I did mean to say that at least one influential C# language designer did not want to introduce this extension method. Once it exists in the BCL, it can be used from any .NET language.

ICollection cast problem

Is there any way to solve the problem in this cast example? I need to add a new element to the collection, so I have to get this working.
IEnumerable enumerable;
IEnumerable enumerable2;
enumerable = new ObservableCollection<Something>();
enumerable2 = new ObservableCollection<Object>();
ICollection<Object> try = (ICollection<Object>)enumerable; // Doesn't work
ICollection<Object> try2 = (ICollection<Object>)enumerable2; // Works
Check out covariance and contravariance with generic parameters in C# 4. It might provide you with more information for future when faced with such problems.
I'm not 100% sure that this is what you mean, but have you tried this:
IEnumerable<Object> try3 = enumerable.Cast<Object>();
OK, that didn't work; how about dynamic typing:
dynamic var = new Something();
dynamic try1 = enumerable;
try1.Add(var);
If all you know is object, then the non-generic interfaces are your friend:
IList list = (IList)foo;
list.Add(yourItem);
However! Not all enumerable items are lists, so not all can be added in this way. IList underpins a lot of data-binding, so is a good bet.
"try" is a reserved word in C#. Have you tried another variable name? or you could use #try if you need to use the "try" variable name.
You can iterate through the Something collection and insert its elements into the object collection.
If you have control over the relevant code, it is perhaps time to rethink the design a little. One often tries to use the most restricted interface possible that can still achieve the desired result. It seems in this case that IEnumerable cannot achieve that result, if one of the things you want to do is to add new items. Changing to a more appropriate type might be a better solution than having to cast the collection.
Use the Cast extension combined with a ToList.
IEnumerable ie1 = new ObservableCollection<string>();
ICollection<object> ic1 = (ICollection<object>)ie1.Cast<object>().ToList();
Or iterate through manually.
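The manual version of the same idea looks something like this (assuming the usual System.Collections, System.Collections.Generic and System.Collections.ObjectModel usings). Note that, just like Cast().ToList(), this builds a new collection; adding to it does not add to the original ObservableCollection:

IEnumerable ie1 = new ObservableCollection<string>();

// Copy the untyped sequence into a collection that supports Add.
ICollection<object> ic1 = new List<object>();
foreach (object item in ie1)
{
    ic1.Add(item);
}
ic1.Add("new element");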
Dynamic typing works. (The only bad thing is that I have to reference the Microsoft.CSharp and System.Core DLLs, which are ~1 MB, so it's not the best for a Silverlight app, but it's the best solution I know of right now.)
