"Unzip" IEnumerable dynamically in C# or best alternative - c#

Let's assume you have a function that returns a lazily-enumerated sequence:
struct AnimalCount
{
    public int Chickens;
    public int Goats;

    public AnimalCount(int chickens, int goats)
    {
        Chickens = chickens;
        Goats = goats;
    }
}
IEnumerable<AnimalCount> FarmsInEachPen()
{
    ....
    yield return new AnimalCount(x, y);
    ....
}
You also have two functions that consume two separate IEnumerables, for example:
ConsumeChicken(IEnumerable<int>);
ConsumeGoat(IEnumerable<int>);
How can you call ConsumeChicken and ConsumeGoat a) without calling ToList() on FarmsInEachPen() beforehand, because it might have two zillion records, and b) without multi-threading?
Basically:
ConsumeChicken(FarmsInEachPen().Select(x => x.Chickens));
ConsumeGoat(FarmsInEachPen().Select(x => x.Goats));
But without forcing the double enumeration.
I can solve it with multithreading, but it gets unnecessarily complicated with a buffer queue for each list.
So I'm looking for a way to split the AnimalCount enumerator into two int enumerators without fully evaluating AnimalCount. There is no problem running ConsumeGoat and ConsumeChicken together in lock-step.
I can feel the solution just out of my grasp but I'm not quite there. I'm thinking along the lines of a helper function that returns an IEnumerable being fed into ConsumeChicken and each time the iterator is used, it internally calls ConsumeGoat, thus executing the two functions in lock-step. Except, of course, I don't want to call ConsumeGoat more than once.

I don't think there is a way to do what you want, since ConsumeChickens(IEnumerable<int>) and ConsumeGoats(IEnumerable<int>) are being called sequentially, each of them enumerating a list separately - how do you expect that to work without two separate enumerations of the list?
Depending on the situation, a better solution is to have ConsumeChicken(int) and ConsumeGoat(int) methods (which each consume a single item), and call them in alternation. Like this:
foreach (var animal in animals)
{
    ConsumeChicken(animal.Chickens);
    ConsumeGoat(animal.Goats);
}
This will enumerate the animals collection only once.
Also, a note: depending on your LINQ provider and what exactly it is you're trying to do, there may be better options. For example, if you're trying to get the total sum of both chickens and goats from a database using LINQ to SQL or LINQ to Entities, the following query...
from a in animals
group a by 0 into g
select new
{
    TotalChickens = g.Sum(x => x.Chickens),
    TotalGoats = g.Sum(x => x.Goats)
}
will result in a single query, and do the summation on the database-end, which is greatly preferable to pulling the entire table over and doing the summation on the client end.
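For reference, the same single-round-trip aggregation can be written in method syntax (a sketch; animals is assumed to be the queryable table):
// Method-syntax equivalent of the query above. Grouping by a constant lets
// the provider compute both sums in one database round trip.
var totals = animals
    .GroupBy(a => 0)
    .Select(g => new
    {
        TotalChickens = g.Sum(x => x.Chickens),
        TotalGoats = g.Sum(x => x.Goats)
    })
    .Single();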

The way you have posed your problem, there is no way to do this. IEnumerable<T> is a pull enumerable - that is, you can GetEnumerator to the front of the sequence and then repeatedly ask "Give me the next item" (MoveNext/Current). You can't, on one thread, have two different things pulling from the animals.Select(a => a.Chickens) and animals.Select(a => a.Goats) at the same time. You would have to do one then the other (which would require materializing the second).
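To make the pull model concrete, here is roughly what any consumer does under the hood (a minimal sketch, not tied to the question's types). There is one cursor per enumerator, and whoever calls MoveNext advances it:
// Manual "pull" enumeration: a single cursor, advanced explicitly.
static void Consume(IEnumerable<int> source)
{
    using (IEnumerator<int> e = source.GetEnumerator())
    {
        while (e.MoveNext())          // ask for the next item
        {
            int current = e.Current;  // read it
            // ... use current ...
        }
    }
}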
The suggestion BlueRaja made is one way to change the problem slightly. I would suggest going that route.
The other alternative is to utilize IObservable<T> from Microsoft's reactive extensions (Rx), a push enumerable. I won't go into the details of how you would do that, but it's something you could look into.
Edit:
The above is assuming that ConsumeChickens and ConsumeGoats are both returning void or are at least not returning IEnumerable<T> themselves - which seems like an obvious assumption. I'd appreciate it if the lame downvoter would actually comment.

Actually, the simplest way to achieve what you want is to convert the FarmsInEachPen() return value into a push collection (an IObservable) and use Reactive Extensions (Rx) to work with it:
var observable = new Subject<AnimalCount>();
// Subscribe (rather than just Do) so the handlers actually run for each item.
observable.Subscribe(x => DoSomethingWithChicken(x.Chickens));
observable.Subscribe(x => DoSomethingWithGoat(x.Goats));

foreach (var item in FarmsInEachPen())
{
    observable.OnNext(item);
}
observable.OnCompleted();

I figured it out, thanks in large part to the path that @Lee put me on.
You need to share a single enumerator between the two zips, and use an adapter function to project the correct element into the sequence.
private static IEnumerable<object> ConsumeChickens(IEnumerable<int> xList)
{
    foreach (var x in xList)
    {
        Console.WriteLine("X: " + x);
        yield return null;
    }
}

private static IEnumerable<object> ConsumeGoats(IEnumerable<int> yList)
{
    foreach (var y in yList)
    {
        Console.WriteLine("Y: " + y);
        yield return null;
    }
}

// Projects one field of the shared enumerator. The "chickens" side (i == 0)
// is the one that advances the shared cursor; the "goats" side only reads
// Current, relying on Zip to pull the two sides alternately in lock-step.
private static IEnumerable<int> SelectHelper(IEnumerator<AnimalCount> enumerator, int i)
{
    bool c = i != 0 || enumerator.MoveNext();
    while (c)
    {
        if (i == 0)
        {
            yield return enumerator.Current.Chickens;
            c = enumerator.MoveNext();
        }
        else
        {
            yield return enumerator.Current.Goats;
        }
    }
}

private static void Main(string[] args)
{
    var enumerator = GetAnimals().GetEnumerator();
    var chickensList = ConsumeChickens(SelectHelper(enumerator, 0));
    var goatsList = ConsumeGoats(SelectHelper(enumerator, 1));

    // Zip drives both consumer pipelines alternately off the single shared enumerator.
    var temp = chickensList.Zip(goatsList, (i, i1) => (object)null);
    temp.ToList();
    Console.WriteLine("Total iterations: " + iterations);
}
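For reference, the sample assumes a GetAnimals() source and an iterations counter that aren't shown above; a minimal hypothetical version might look like this (using the two-argument AnimalCount constructor from the question):
// Hypothetical source used by Main above; 'iterations' records how many
// AnimalCount items were actually produced, to confirm a single pass.
private static int iterations;

private static IEnumerable<AnimalCount> GetAnimals()
{
    for (int i = 0; i < 10; i++)
    {
        iterations++;
        yield return new AnimalCount(i, i * 2);
    }
}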

Related

Why use the yield keyword, when I could just use an ordinary IEnumerable?

Given this code:
IEnumerable<object> FilteredList()
{
foreach( object item in FullList )
{
if( IsItemInPartialList( item ) )
yield return item;
}
}
Why should I not just code it this way?:
IEnumerable<object> FilteredList()
{
var list = new List<object>();
foreach( object item in FullList )
{
if( IsItemInPartialList( item ) )
list.Add(item);
}
return list;
}
I sort of understand what the yield keyword does. It tells the compiler to build a certain kind of thing (an iterator). But why use it? Apart from it being slightly less code, what's it do for me?
Using yield makes the collection lazy.
Let's say you just need the first five items. Your way, I have to loop through the entire list to get the first five items. With yield, I only loop through the first five items.
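For instance (FilteredList is the method from the question):
// With the yield version, enumeration of FullList stops as soon as five
// matching items have been produced; the List-building version always
// walks the whole of FullList first.
var firstFive = FilteredList().Take(5).ToList();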
The benefit of iterator blocks is that they work lazily. So you can write a filtering method like this:
public static IEnumerable<T> Where<T>(this IEnumerable<T> source,
Func<T, bool> predicate)
{
foreach (var item in source)
{
if (predicate(item))
{
yield return item;
}
}
}
That will allow you to filter a stream as long as you like, never buffering more than a single item at a time. If you only need the first value from the returned sequence, for example, why would you want to copy everything into a new list?
As another example, you can easily create an infinite stream using iterator blocks. For example, here's a sequence of random numbers:
public static IEnumerable<int> RandomSequence(int minInclusive, int maxExclusive)
{
Random rng = new Random();
while (true)
{
yield return rng.Next(minInclusive, maxExclusive);
}
}
How would you store an infinite sequence in a list?
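You can't, but you can lazily take a finite slice of it (using the RandomSequence method above):
// Only ten random values are ever generated.
List<int> tenValues = RandomSequence(1, 7).Take(10).ToList();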
My Edulinq blog series gives a sample implementation of LINQ to Objects which makes heavy use of iterator blocks. LINQ is fundamentally lazy where it can be - and putting things in a list simply doesn't work that way.
With the "list" code, you have to process the full list before you can pass it on to the next step. The "yield" version passes the processed item immediately to the next step. If that "next step" contains a ".Take(10)" then the "yield" version will only process the first 10 items and forget about the rest. The "list" code would have processed everything.
This means that you see the most difference when you need to do a lot of processing and/or have long lists of items to process.
You can use yield to return items that aren't in a list. Here's a little sample that could iterate infinitely through a list until canceled.
public IEnumerable<int> GetNextNumber()
{
while (true)
{
for (int i = 0; i < 10; i++)
{
yield return i;
}
}
}
public bool Canceled { get; set; }
public void StartCounting()
{
foreach (var number in GetNextNumber())
{
if (this.Canceled) break;
Console.WriteLine(number);
}
}
This writes
0
1
2
3
4
5
6
7
8
9
0
1
2
3
4
...etc. to the console until canceled.
// Illustrative: assumes the items yielded by FilteredList() expose a Name property.
object jamesItem = null;
foreach (var item in FilteredList())
{
    if (item.Name == "James")
    {
        jamesItem = item;
        break;
    }
}
return jamesItem;
When the above code is used to loop through FilteredList(), and assuming item.Name == "James" is satisfied by the 2nd item in the list, the method using yield will yield only twice. This is lazy behavior.
Whereas the method using a list will add all n objects to the list and pass the complete list to the calling method.
This is exactly a use case where difference between IEnumerable and IList can be highlighted.
The best real world example I've seen for the use of yield would be to calculate a Fibonacci sequence.
Consider the following code:
class Program
{
static void Main(string[] args)
{
Console.WriteLine(string.Join(", ", Fibonacci().Take(10)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(15).Take(1)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(10).Take(5)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(100).Take(1)));
Console.ReadKey();
}
private static IEnumerable<long> Fibonacci()
{
long a = 0;
long b = 1;
while (true)
{
long temp = a;
a = b;
yield return a;
b = temp + b;
}
}
}
This will return:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55
987
89, 144, 233, 377, 610
1298777728820984005
This is nice because it allows you to calculate out an infinite series quickly and easily, giving you the ability to use the Linq extensions and query only what you need.
why use [yield]? Apart from it being slightly less code, what's it do for me?
Sometimes it is useful, sometimes not. If the entire set of data must be examined and returned then there is not going to be any benefit in using yield because all it did was introduce overhead.
When yield really shines is when only a partial set is returned. I think the best example is sorting. Assume you have a list of objects containing a date and a dollar amount from this year and you would like to see the first handful (5) records of the year.
In order to accomplish this, the list must be sorted ascending by date and then the first 5 taken. Without yield, the entire sorted list would have to be built and handed back before the caller could take its 5 items.
However, with yield, the consumer stops pulling once the first 5 items have been produced, so everything downstream of the sort stops there; note that OrderBy itself still has to buffer the whole source before it can yield its first element (newer runtimes can optimize an OrderBy followed by Take into a partial sort). This can still save a large amount of time.
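A minimal sketch of that pipeline (Transaction is a hypothetical type; OrderBy and Take are standard LINQ):
// Only five items ever flow past Take; OrderBy itself still buffers the
// whole source before it can yield the first sorted item.
IEnumerable<Transaction> firstFive = transactions
    .OrderBy(t => t.Date)
    .Take(5);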
The yield return statement allows you to return one item at a time. Collecting all the items into a list and then returning that list adds memory overhead.

How to handle an "infinite" IEnumerable?

A trivial example of an "infinite" IEnumerable would be
IEnumerable<int> Numbers() {
int i=0;
while(true) {
yield return unchecked(i++);
}
}
I know, that
foreach(int i in Numbers().Take(10)) {
Console.WriteLine(i);
}
and
var q = Numbers();
foreach(int i in q.Take(10)) {
Console.WriteLine(i);
}
both work fine (and print out the numbers 0-9).
But are there any pitfalls when copying or handling expressions like q? Can I rely on the fact, that they are always evaluated "lazy"? Is there any danger to produce an infinite loop?
As long as you only call lazy, un-buffered methods you should be fine. So Skip, Take, Select, etc are fine. However, Min, Count, OrderBy etc would go crazy.
It can work, but you need to be cautious. Or inject a Take(somethingFinite) as a safety measure (or some other custom extension method that throws an exception after too much data).
For example:
public static IEnumerable<T> SanityCheck<T>(this IEnumerable<T> data, int max) {
int i = 0;
foreach(T item in data) {
if(++i >= max) throw new InvalidOperationException();
yield return item;
}
}
Yes, you are guaranteed that the code above will be executed lazily. While it looks (in your code) like you'd loop forever, your code actually produces something like this:
IEnumerable<int> Numbers()
{
return new PrivateNumbersEnumerable();
}
private class PrivateNumbersEnumerable : IEnumerable<int>
{
    public IEnumerator<int> GetEnumerator()
    {
        return new PrivateNumbersEnumerator();
    }

    // (The non-generic IEnumerable.GetEnumerator is omitted for brevity.)
}

private class PrivateNumbersEnumerator : IEnumerator<int>
{
    private int i;

    public bool MoveNext() { i++; return true; }

    public int Current
    {
        get { return i; }
    }

    // (Reset, Dispose and the non-generic Current are omitted for brevity.)
}
(This obviously isn't exactly what will be generated, since this is pretty specific to your code, but it's nonetheless similar and should show you why it's going to be lazily evaluated).
You would have to avoid any greedy functions that attempt to read to end. This would include Enumerable extensions like: Count, ToArray/ToList, and aggregates Avg/Min/Max, etc.
There's nothing wrong with infinite lazy lists, but you must make conscious decisions about how to handle them.
Use Take to limit the impact of an endless loop by setting an upper bound even if you don't need them all.
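For example, using the Numbers() method from the question:
// Bounded: only 1000 values are ever produced.
int total = Numbers().Take(1000).Sum();

// Unbounded: this would never terminate.
// int bad = Numbers().Sum();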
Yes, your code will always work without infinite looping. Someone might come along though later and mess things up. Suppose they want to do:
var q = Numbers().ToList();
Then, you're hosed! Many "aggregate" functions will kill you, like Max().
If it weren't lazily evaluated, your first example wouldn't work as expected in the first place.

Design pattern for aggregating lazy lists

I'm writing a program as follows:
Find all files with the correct extension in a given directory
Foreach, find all occurrences of a given string in those files
Print each line
I'd like to write this in a functional way, as a series of generator functions (things that call yield return and only return one item at a time lazily-loaded), so my code would read like this:
IEnumerable<string> allFiles = GetAllFiles();
IEnumerable<string> matchingFiles = GetMatches( "*.txt", allFiles );
IEnumerable<string> contents = GetFileContents( matchingFiles );
IEnumerable<string> matchingLines = GetMatchingLines( contents );
foreach( var lineText in matchingLines )
Console.WriteLine( "Found: " + lineText );
This is all fine, but what I'd also like to do is print some statistics at the end. Something like this:
Found 233 matches in 150 matching files. Scanned 3,297 total files in 5.72s
The problem is, writing the code in a 'pure functional' style like above, each item is lazily loaded.
You only know how many files matched in total once the final foreach loop completes, and because only one item is ever yielded at a time, the code doesn't have any place to keep track of how many things it has found previously. If you invoke LINQ's matchingLines.Count() method, it will re-enumerate the collection!
I can think of many ways to solve this problem, but all of them seem to be somewhat ugly. It strikes me as something that people are bound to have done before, and I'm sure there'll be a nice design pattern which shows a best practice way of doing this.
Any ideas? Cheers
In a similar vein to other answers, but taking a slightly more generic approach ...
... why not create a Decorator class that can wrap an existing IEnumerable implementation and calculate the statistic as it passes the items through.
Here's a Counter class I just threw together - but you could create variations for other kinds of aggregation too.
// (Requires using System.Collections and System.Collections.Generic.)
public class Counter<T> : IEnumerable<T>
{
    private readonly IEnumerable<T> mSource;

    public int Count { get; private set; }

    public Counter(IEnumerable<T> source)
    {
        mSource = source;
        Count = 0;
    }

    public IEnumerator<T> GetEnumerator()
    {
        foreach (var item in mSource)
        {
            Count++;
            yield return item;
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
You could create three instances of Counter:
One to wrap GetAllFiles() counting the total number of files;
One to wrap GetMatches() counting the number of matching files; and
One to wrap GetMatchingLines() counting the number of matching lines.
The key with this approach is that you're not layering multiple responsibilities onto your existing classes/methods - the GetMatchingLines() method only handles the matching, you're not asking it to track stats as well.
Clarification in response to a comment by Mitcham:
The final code would look something like this:
var files = new Counter<string>( GetAllFiles());
var matchingFiles = new Counter<string>(GetMatches( "*.txt", files ));
var contents = GetFileContents( matchingFiles );
var linesFound = new Counter<string>(GetMatchingLines( contents ));
foreach( var lineText in linesFound )
Console.WriteLine( "Found: " + lineText );
string message
= String.Format(
"Found {0} matches in {1} matching files. Scanned {2} files",
linesFound.Count,
matchingFiles.Count,
files.Count);
Console.WriteLine(message);
Note that this is still a functional approach - the variables used are immutable (more like bindings than variables), and the overall function has no side-effects.
I would say that you need to encapsulate the process into a 'Matcher' class in which your methods capture statistics as they progress.
public class Matcher
{
    private int totalFileCount;
    private int matchedCount;
    private int lineCount;
    private DateTime start;
    private DateTime stop;

    public IEnumerable<File> Match(string pattern)
    {
        start = DateTime.Now;
        foreach (File file in GetMatchedFiles(pattern))
        {
            yield return file;
        }
        stop = DateTime.Now;
        // lineCount would be updated by a similar GetMatchingLines iterator, omitted here.
        System.Console.WriteLine(string.Format(
            "Found {0} matches in {1} matching files." +
            " {2} total files scanned in {3}.",
            lineCount, matchedCount,
            totalFileCount, (stop - start).ToString()));
    }

    private IEnumerable<File> GetMatchedFiles(string pattern)
    {
        foreach (File file in SomeFileRetrievalMethod())
        {
            totalFileCount++;
            if (MatchPattern(pattern, file.FileName))
            {
                matchedCount++;
                yield return file;
            }
        }
    }
}
I'll stop there since I'm supposed to be coding work stuff, but the general idea is there. The entire point of 'pure' functional programming is to not have side effects, and this type of statistics calculation is a side effect.
I can think of two ideas
Pass in a context object and return (string + context) from your enumerators - the purely functional solution
use thread-local storage for your statistics (CallContext); you can be fancy and support a stack of contexts. So you would have code like this:
using (var stats = DirStats.Create())
{
IEnumerable<string> allFiles = GetAllFiles();
IEnumerable<string> matchingFiles = GetMatches( "*.txt", allFiles );
IEnumerable<string> contents = GetFileContents( matchingFiles );
stats.Print();
IEnumerable<string> matchingLines = GetMatchingLines( contents );
stats.Print();
}
If you're happy to turn your code upside down, you might be interested in Push LINQ. The basic idea is to reverse the "pull" model of IEnumerable<T> and turn it into a "push" model with observers - each part of the pipeline effectively pushes its data past any number of observers (using event handlers) which typically form new parts of the pipeline. This gives a really easy way to hook up multiple aggregates to the same data.
See this blog entry for some more details. I gave a talk on it in London a while ago - my page of talks has a few links for sample code, the slide deck, video etc.
It's a fun little project, but it does take a bit of getting your head around.
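To illustrate the push idea without depending on the Push LINQ library itself, here is a hand-rolled sketch (not Push LINQ's actual API): the producer pushes each item to any number of callbacks, so several aggregates can observe a single pass.
// One enumeration, many observers.
static void PushAll<T>(IEnumerable<T> source, params Action<T>[] observers)
{
    foreach (T item in source)
        foreach (Action<T> observer in observers)
            observer(item);
}

// Usage: count and sum in a single pass over 'numbers'.
// int count = 0; long sum = 0;
// PushAll(numbers, x => count++, x => sum += x);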
I took Bevan's code and refactored it around until I was content. Fun stuff.
public class Counter
{
public int Count { get; set; }
}
public static class CounterExtensions
{
public static IEnumerable<T> ObserveCount<T>
(this IEnumerable<T> source, Counter count)
{
foreach (T t in source)
{
count.Count++;
yield return t;
}
}
public static IEnumerable<T> ObserveCount<T>
(this IEnumerable<T> source, IList<Counter> counters)
{
Counter c = new Counter();
counters.Add(c);
return source.ObserveCount(c);
}
}
public static class CounterTest
{
public static void Test1()
{
IList<Counter> counters = new List<Counter>();
//
IEnumerable<int> step1 =
Enumerable.Range(0, 100).ObserveCount(counters);
//
IEnumerable<int> step2 =
step1.Where(i => i % 10 == 0).ObserveCount(counters);
//
IEnumerable<int> step3 =
step2.Take(3).ObserveCount(counters);
//
step3.ToList();
foreach (Counter c in counters)
{
Console.WriteLine(c.Count);
}
}
}
Output as expected: 21, 3, 3
Assuming those functions are your own, the only thing I can think of is the Visitor pattern, passing in an abstract visitor function that calls you back when each thing happens. For example: pass an ILineVisitor into GetFileContents (which I'm assuming breaks up the file into lines). ILineVisitor would have a method like OnVisitLine(String line), you could then implement the ILineVisitor and make it keep the appropriate stats. Rinse and repeat with a ILineMatchVisitor, IFileVisitor etc. Or you could use a single IVisitor with an OnVisit() method which has a different semantic in each case.
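A rough sketch of what that might look like (hypothetical interfaces, just to illustrate the shape):
// Hypothetical visitor: GetFileContents would call OnVisitLine for every line,
// and the implementation accumulates whatever statistics it cares about.
public interface ILineVisitor
{
    void OnVisitLine(string line);
}

public class CountingLineVisitor : ILineVisitor
{
    public int LineCount { get; private set; }

    public void OnVisitLine(string line)
    {
        LineCount++;
    }
}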
Your functions would each need to take a Visitor, and call its OnVisit() at the appropriate time, which may seem annoying, but at least the visitor could be used to do lots of interesting things, other than just what you're doing here. In fact you could actually avoid writing GetMatchingLines by passing a visitor that checks for the match in OnVisitLine(String line) into GetFileContents.
Is this one of the ugly things you'd already considered?

Why is .ForEach() on IList<T> and not on IEnumerable<T>? [duplicate]

Possible Duplicate:
Why is there not a ForEach extension method on the IEnumerable interface?
I've noticed when writing LINQ-y code that .ForEach() is a nice idiom to use. For example, here is a piece of code that takes the following inputs, and produces these outputs:
{ "One" } => "One"
{ "One", "Two" } => "One, Two"
{ "One", "Two", "Three", "Four" } => "One, Two, Three and Four";
And the code:
private string InsertCommasAttempt(IEnumerable<string> words)
{
List<string> wordList = words.ToList();
StringBuilder sb = new StringBuilder();
var wordsAndSeparators = wordList.Select((string word, int pos) =>
{
if (pos == 0) return new { Word = word, Leading = string.Empty };
if (pos == wordList.Count - 1) return new { Word = word, Leading = " and " };
return new { Word = word, Leading = ", " };
});
wordsAndSeparators.ToList().ForEach(v => sb.Append(v.Leading).Append(v.Word));
return sb.ToString();
}
Note the interjected .ToList() before the .ForEach() on the second to last line.
Why is it that .ForEach() isn't available as an extension method on IEnumerable<T>? With an example like this, it just seems weird.
Because List<T>.ForEach(Action<T>) existed before the LINQ extension methods did.
Since it was not added with the other extension methods, one can assume that the C# designers felt it was a bad design and prefer the foreach construct.
Edit:
If you want you can create your own extension method, it won't override the one for a List<T> but it will work for any other class which implements IEnumerable<T>.
public static class IEnumerableExtensions
{
public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
{
foreach (T item in source)
action(item);
}
}
According to Eric Lippert, this is mostly for philosophical reasons. You should read the whole post, but here's the gist as far as I'm concerned:
I am philosophically opposed to providing such a method, for two reasons.
The first reason is that doing so violates the functional programming principles that all the other sequence operators are based upon. Clearly the sole purpose of a call to this method is to cause side effects.
The purpose of an expression is to compute a value, not to cause a side effect. The purpose of a statement is to cause a side effect. The call site of this thing would look an awful lot like an expression (though, admittedly, since the method is void-returning, the expression could only be used in a "statement expression" context.)
It does not sit well with me to make the one and only sequence operator that is only useful for its side effects.
The second reason is that doing so adds zero new representational power to the language.
Because ForEach() on an IEnumerable would just be a normal foreach loop, like this:
foreach (T item in MyEnumerable)
{
    // Action<T> goes here
}
ForEach isn't on IList<T>; it's on List<T>. You were using the concrete List<T> in your example.
I am just guessing here, but putting ForEach on IEnumerable would make operations on it have side effects. None of the "available" extension methods cause side effects; putting an imperative method like ForEach on there would muddy the API, I guess. Also, ForEach would initialize the lazy collection.
Personally I've been fending off the temptation to just add my own, just to keep side-effect-free functions separate from ones with side effects.
ForEach is implemented in the concrete class List<T>
Just a guess, but List can iterate over its items without creating an enumerator:
public void ForEach(Action<T> action)
{
if (action == null)
{
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.match);
}
for (int i = 0; i < this._size; i++)
{
action(this._items[i]);
}
}
This can lead to better performance. With IEnumerable, you don't have the option to use an ordinary for-loop.
LINQ follows the pull-model and all its (extension) methods should return IEnumerable<T>, except for ToList(). The ToList() is there to end the pull-chain.
ForEach() is from the push-model world.
You can still write your own extension method to do this, as pointed out by Samuel.
I honestly don't know for sure why the .ForEach(Action) isn't included on IEnumerable but, right, wrong or indifferent, that's the way it is...
I DID however want to highlight the performance issue mentioned in other comments. There is a performance hit based on how you loop over a collection. It is relatively minor but nevertheless, it certainly exists. Here is an incredibly fast and sloppy code snippet to show the relations... only takes a minute or so to run through.
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Start Loop timing test: loading collection...");
List<int> l = new List<int>();
for (long i = 0; i < 60000000; i++)
{
l.Add(Convert.ToInt32(i));
}
Console.WriteLine("Collection loaded with {0} elements: start timings",l.Count());
Console.WriteLine("\n<===============================================>\n");
Console.WriteLine("foreach loop test starting...");
DateTime start = DateTime.Now;
//l.ForEach(x => l[x].ToString());
foreach (int x in l)
l[x].ToString();
Console.WriteLine("foreach Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
Console.WriteLine("\n<===============================================>\n");
Console.WriteLine("List.ForEach(x => x.action) loop test starting...");
start = DateTime.Now;
l.ForEach(x => l[x].ToString());
Console.WriteLine("List.ForEach(x => x.action) Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
Console.WriteLine("\n<===============================================>\n");
Console.WriteLine("for loop test starting...");
start = DateTime.Now;
int count = l.Count();
for (int i = 0; i < count; i++)
{
l[i].ToString();
}
Console.WriteLine("for Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
Console.WriteLine("\n<===============================================>\n");
Console.WriteLine("\n\nPress Enter to continue...");
Console.ReadLine();
}
}
Don't get hung up on this too much though. Performance is the currency of application design but unless your application is experiencing an actual performance hit that is causing usability problems, focus on coding for maintainability and reuse since time is the currency of real life business projects...
It's called "Select" on IEnumerable<T>
I am enlightened, thank you.

Is yield useful outside of LINQ?

Whenever I think I can use the yield keyword, I take a step back and look at how it will impact my project. I always end up returning a collection instead of yielding because I feel the overhead of maintaining the state of the yielding method doesn't buy me much. In almost all cases where I am returning a collection, I feel that 90% of the time the calling method will be iterating over all elements in the collection, or will be seeking a series of elements throughout the entire collection.
I do understand its usefulness in linq, but I feel that only the linq team is writing such complex queriable objects that yield is useful.
Has anyone written anything like or not like linq where yield was useful?
Note that with yield, you are iterating over the collection once, but when you build a list, you'll be iterating over it twice.
Take, for example, a filter iterator:
static IEnumerable<T> Filter<T>(this IEnumerable<T> coll, Func<T, bool> func)
{
    foreach (T t in coll)
        if (func(t)) yield return t;
}
Now, you can chain this:
MyColl.Filter(x => x.id > 100).Filter(x => x.val < 200).Filter(...)
Your method would be creating (and tossing) three lists. My method iterates over the collection just once.
Also, when you return a collection, you are forcing a particular implementation on you users. An iterator is more generic.
I do understand its usefulness in linq, but I feel that only the linq team is writing such complex queriable objects that yield is useful.
Yield was useful as soon as it got implemented in .NET 2.0, which was long before anyone ever thought of LINQ.
Why would I write this function:
IList<string> LoadStuff() {
var ret = new List<string>();
foreach(var x in SomeExternalResource)
ret.Add(x);
return ret;
}
When I can use yield, and save the effort and complexity of creating a temporary list for no good reason:
IEnumerable<string> LoadStuff() {
foreach(var x in SomeExternalResource)
yield return x;
}
It can also have huge performance advantages. If your code only happens to use the first 5 elements of the collection, then using yield will often avoid the effort of loading anything past that point. If you build a collection then return it, you waste a ton of time and space loading things you'll never need.
I could go on and on....
I recently had to make a representation of mathematical expressions in the form of an Expression class. When evaluating the expression I have to traverse the tree structure with a post-order treewalk. To achieve this I implemented IEnumerable<T> like this:
public IEnumerator<Expression<T>> GetEnumerator()
{
if (IsLeaf)
{
yield return this;
}
else
{
foreach (Expression<T> expr in LeftExpression)
{
yield return expr;
}
foreach (Expression<T> expr in RightExpression)
{
yield return expr;
}
yield return this;
}
}
Then I can simply use a foreach to traverse the expression. You can also add a Property to change the traversal algorithm as needed.
At a previous company, I found myself writing loops like this:
for (DateTime date = schedule.StartDate; date <= schedule.EndDate;
date = date.AddDays(1))
With a very simple iterator block, I was able to change this to:
foreach (DateTime date in schedule.DateRange)
It made the code a lot easier to read, IMO.
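The iterator block behind that property might look something like this (a sketch; Schedule and its DateRange property are assumed from the description):
// Hypothetical Schedule.DateRange: yields every date from StartDate to EndDate.
public IEnumerable<DateTime> DateRange
{
    get
    {
        for (DateTime date = StartDate; date <= EndDate; date = date.AddDays(1))
            yield return date;
    }
}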
yield was developed for C#2 (before Linq in C#3).
We used it heavily in a large enterprise C#2 web application when dealing with data access and heavily repeated calculations.
Collections are great any time you have a few elements that you're going to hit multiple times.
However in lots of data access scenarios you have large numbers of elements that you don't necessarily need to pass round in a great big collection.
This is essentially what the SqlDataReader does - it's a forward only custom enumerator.
What yield lets you do is quickly and with minimal code write your own custom enumerators.
Everything yield does could be done in C#1 - it just took reams of code to do it.
Linq really maximises the value of the yield behaviour, but it certainly isn't the only application.
Whenever your function returns IEnumerable, you should consider using yield. And it isn't limited to .NET 3.0 and later.
.Net 2.0 example:
public static class FuncUtils
{
public delegate T Func<T>();
public delegate T Func<A0, T>(A0 arg0);
public delegate T Func<A0, A1, T>(A0 arg0, A1 arg1);
...
public static IEnumerable<T> Filter<T>(IEnumerable<T> e, Func<T, bool> filterFunc)
{
foreach (T el in e)
if (filterFunc(el))
yield return el;
}
public static IEnumerable<R> Map<T, R>(IEnumerable<T> e, Func<T, R> mapFunc)
{
foreach (T el in e)
yield return mapFunc(el);
}
...
I'm not sure about C#'s implementation of yield(), but on dynamic languages, it's far more efficient than creating the whole collection. on many cases, it makes it easy to work with datasets much bigger than RAM.
I am a huge Yield fan in C#. This is especially true in large homegrown frameworks where often methods or properties return List that is a sub-set of another IEnumerable. The benefits that I see are:
the return value of a method that uses yield is immutable
you are only iterating over the list once
it has deferred (lazy) execution, meaning the code that returns the values is not executed until needed (though this can bite you if you don't know what you're doing)
if the source list changes, you don't have to call again to get another IEnumerable; you just iterate over the IEnumerable again
many more
One other HUGE benefit of yield is when your method will potentially return millions of values: so many that there is the potential of running out of memory just building the List before the method can even return it. With yield, the method can just create and return millions of values, as long as the caller doesn't store every value. So it's good for large-scale data processing / aggregating operations.
Personally, I haven't found I'm using yield in my normal day-to-day programming. However, I've recently started playing with the Robotics Studio samples and found that yield is used extensively there, so I also see it being used in conjunction with the CCR (Concurrency and Coordination Runtime) where you have async and concurrency issues.
Anyway, still trying to get my head around it as well.
Yield is useful because it saves you space. Most optimizations in programming makes a trade off between space (disk, memory, networking) and processing. Yield as a programming construct allows you to iterate over a collection many times in sequence without needing a separate copy of the collection for each iteration.
consider this example:
static IEnumerable<Person> GetAllPeople()
{
return new List<Person>()
{
new Person() { Name = "George", Surname = "Bush", City = "Washington" },
new Person() { Name = "Abraham", Surname = "Lincoln", City = "Washington" },
new Person() { Name = "Joe", Surname = "Average", City = "New York" }
};
}
static IEnumerable<Person> GetPeopleFrom(this IEnumerable<Person> people, string where)
{
foreach (var person in people)
{
if (person.City == where) yield return person;
}
yield break;
}
static IEnumerable<Person> GetPeopleWithInitial(this IEnumerable<Person> people, string initial)
{
foreach (var person in people)
{
if (person.Name.StartsWith(initial)) yield return person;
}
yield break;
}
static void Main(string[] args)
{
var people = GetAllPeople();
foreach (var p in people.GetPeopleFrom("Washington"))
{
// do something with washingtonites
}
foreach (var p in people.GetPeopleWithInitial("G"))
{
// do something with people with initial G
}
foreach (var p in people.GetPeopleWithInitial("P").GetPeopleFrom("New York"))
{
// etc
}
}
(Obviously you are not required to use yield with extension methods, it just creates a powerful paradigm to think about data.)
As you can see, if you have a lot of these "filter" methods (but it can be any kind of method that does some work on a list of people) you can chain many of them together without requiring extra storage space for each step. This is one way of raising the programming language (C#) up to express your solutions better.
The first side-effect of yield is that it delays execution of the filtering logic until you actually require it. If you therefore create a variable of type IEnumerable<> (with yields) but never iterate through it, you never execute the logic or consume the space which is a powerful and free optimization.
The other side-effect is that yield operates on the lowest common collection interface (IEnumerable<>) which enables the creation of library-like code with wide applicability.
Note that yield allows you to do things in a "lazy" way. By lazy, I mean that the evaluation of the next element in the IEnumerable is not done until the element is actually requested. This allows you the power to do a couple of different things. One is that you could yield an infinitely long list without the need to actually make infinite calculations. Second, you could return an enumeration of function applications. The functions would only be applied as you iterate through the list.
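A small sketch of that second point: the work only happens as the sequence is consumed (Expensive is a hypothetical function; Select is standard LINQ):
// Nothing calls Expensive here; this line only builds the pipeline.
IEnumerable<int> results = inputs.Select(x => Expensive(x));

// Expensive runs exactly three times, as the first three results are pulled.
foreach (int r in results.Take(3))
    Console.WriteLine(r);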
I've used yield in non-LINQ code for things like this (assuming the functions do not live in the same class):
public IEnumerable<string> GetData()
{
foreach(String name in _someInternalDataCollection)
{
yield return name;
}
}
...
public void DoSomething()
{
foreach(String value in GetData())
{
//... Do something with value that doesn't modify _someInternalDataCollection
}
}
You have to be careful not to inadvertently modify the collection that your GetData() function is iterating over though, or it will throw an exception.
Yield is very useful in general. It's in Ruby, among other languages that support functional-style programming, so it's not as though it's tied to LINQ. It's more the other way around: LINQ is functional in style, so it uses yield.
I had a problem where my program was using a lot of cpu in some background tasks. What I really wanted was to still be able to write functions like normal, so that I could easily read them (i.e. the whole threading vs. event based argument). And still be able to break the functions up if they took too much cpu. Yield is perfect for this. I wrote a blog post about this and the source is available for all to grok :)
The System.Linq IEnumerable extensions are great, but sometime you want more. For example, consider the following extension:
public static class CollectionSampling
{
public static IEnumerable<T> Sample<T>(this IEnumerable<T> coll, int max)
{
var rand = new Random();
using (var enumerator = coll.GetEnumerator())
{
while (enumerator.MoveNext())
{
yield return enumerator.Current;
int currentSample = rand.Next(max);
for (int i = 1; i <= currentSample; i++)
enumerator.MoveNext();
}
}
}
}
Another interesting advantage of yielding is that the caller cannot cast the return value to the original collection type and modify your internal collection.
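A small sketch of that last point (hypothetical names): returning the backing list directly lets a caller cast it back and mutate it, whereas an iterator only ever hands out items.
private readonly List<string> _names = new List<string> { "a", "b" };

// Risky: the caller can cast this back to List<string> and modify _names.
public IEnumerable<string> NamesDirect() { return _names; }

// Safe: the caller only ever sees the compiler-generated iterator, never the list.
public IEnumerable<string> NamesYielded()
{
    foreach (string name in _names)
        yield return name;
}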
