IEnumerable items disappearing before being used using LINQ Where - c#

I'm fairly new to programming and I've been playing around with writing some random functions.
I wrote the function below, which works loosely based on the sieve of Eratosthenes. Initially though, I was having a problem with the updatedEntries IEnumerable.
The updatedEntries was sort of populating (something to do with deferred execution, I gather - in debug mode 'current' was null, but the results view contained the relevant items), but when RemoveWhere was applied to oddPrimesAndMultiples, the items in updatedEntries disappeared, even though I don't see why they should still be linked to the items in oddPrimesAndMultiples. (I could just be completely misunderstanding what's going on, of course, and the problem might be something else entirely!)
The problem doesn't arise if I change updatedEntries to a List rather than an IEnumerable, and I've actually now rewritten that statement without using LINQ to (potentially?) make better use of the fact I'm using a SortedSet anyway... but I would still like to know why the issue arose in the first place!
Here is my code:
public static IEnumerable<int> QuickPrimes()
{
    int firstPrime = 2;
    int firstOddPrime = 3;
    int currentValue = firstOddPrime;
    int currentMinimumMultiple;
    SortedSet<Tuple<int, int>> oddPrimesAndMultiples = new SortedSet<Tuple<int, int>>() { new Tuple<int, int>(firstOddPrime, firstOddPrime) };
    IEnumerable<Tuple<int, int>> updatedEntries;
    yield return firstPrime;
    yield return firstOddPrime;
    while (true)
    {
        currentMinimumMultiple = oddPrimesAndMultiples.First().Item1;
        while (currentValue < currentMinimumMultiple)
        {
            yield return currentValue;
            oddPrimesAndMultiples.Add(new Tuple<int, int>(currentValue * 3, currentValue));
            currentValue += 2;
        }
        updatedEntries = oddPrimesAndMultiples.Where(tuple => tuple.Item1 == currentMinimumMultiple)
                                              .Select(t => new Tuple<int, int>(t.Item1 + 2 * t.Item2, t.Item2));
        oddPrimesAndMultiples.RemoveWhere(t => t.Item1 == currentMinimumMultiple);
        oddPrimesAndMultiples.UnionWith(updatedEntries);
        currentValue += 2;
    }
}
and the main where I'm testing the function:
static void Main(string[] args)
{
    foreach (int prime in Problems.QuickPrimes())
    {
        Console.WriteLine(prime);
        if (prime > 20) return;
    }
}
Many thanks in advance!

The trap is that updatedEntries is defined in one line, but actually executed later.
To bring it back to the basics, see this code snippet (from Linqpad):
var ints = new SortedSet<int>( new[] { 1,2,3,4,5,6,7,8,9,10});
var updatedEntries = ints.Where(i => i > 5); // No ToList()!
updatedEntries.Dump();
This shows 6, 7, 8, 9, 10.
ints.RemoveWhere(i => i > 7);
updatedEntries.Dump();
Now this shows 6, 7, because updatedEntries is re-executed.
ints.UnionWith(updatedEntries);
This adds 6 and 7 (which are already in the set), while you expected it to add the items from the first listing: 6, 7, 8, 9, 10.
So when defining an IEnumerable you should always be aware of when it's actually executed: it acts on the state of the program at the point of execution, not the point of definition.
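A minimal way to avoid the trap is to materialize the query before mutating the set, for example with ToList(). This is just a sketch of the LINQPad snippet above with that one change:

var ints = new SortedSet<int>(new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 });

// ToList() runs the query immediately, so later changes to `ints` cannot affect it.
var updatedEntries = ints.Where(i => i > 5).ToList();

ints.RemoveWhere(i => i > 7);   // updatedEntries still holds 6, 7, 8, 9, 10
ints.UnionWith(updatedEntries); // puts 8, 9, 10 back into the set

The same idea applies to the QuickPrimes method: appending ToList() to the Where/Select statement takes a snapshot before RemoveWhere runs, which is also why switching updatedEntries to a List made the problem disappear.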


Error when trying to split list into smaller lists

The Error:
cannot convert from 'System.Collections.Generic.List<System.Collections.Generic.IEnumerable<...>>' to 'System.Collections.Generic.List<...>'
The Code:
for (int i = 0; i < Speed; i++)
{
    Tasks[i] = Task.Run(() =>
    {
        var arr_ = arr.Chunk(Total / Speed).ToList();
        Program.Check(arr_, Key, Current, Total, Node, Token);
    }, Token);
}
Chunk(int) Method:
public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int chunkSize)
{
    if (chunkSize <= 0)
    {
        throw new ArgumentException("chunkSize must be greater than 0.");
    }
    while (list.Any())
    {
        yield return list.Take(chunkSize);
        list = list.Skip(chunkSize);
    }
}
I've been stuck here for a while now without a solution, can any of you tell me what I'm doing wrong? The idea is to go from a bigger list (arr) and convert it into smaller lists of Total / Speed size in a loop, which are then used by another function.
The way I understood yield return is that every time you call it, it's supposed to return the next iteration of the loop it is in, but I'm not so sure that's exactly how it works, or else it looks like it should work here.
Any help is appreciated, thanks
Chunk() returns an IEnumerable of IEnumerable<T>. You're trying to convert that to a List<T>, which can never work.
You take a list of stuff, break it into smaller lists of the stuff, and attempt to put all those smaller lists back into one big list.
If you really do intend to merge the sub-lists back into a single list, you can do it like this:
var _list = list.Chunk(3).SelectMany(i => i).ToList();
If you consume Chunk() properly based on its return type, it works just fine, e.g.
List<int> list = new List<int>() { 1, 2, 3, 4, 5, 6 };
foreach (var chunk in list.Chunk(3))
    Console.WriteLine(string.Join(", ", chunk));
outputs
1, 2, 3
4, 5, 6
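If the intent of the original loop was for each task to work on a single chunk (rather than re-chunking the whole list inside every task), one possible sketch is below. It assumes arr is a sequence of int and that Program.Check can accept a single List<int>; neither type is shown in the question, so treat both as assumptions:

// Materialize each chunk into its own List up front.
var chunks = arr.Chunk(Total / Speed)
                .Select(chunk => chunk.ToList())
                .ToList();

// Assumes Tasks has at least chunks.Count slots.
for (int i = 0; i < chunks.Count; i++)
{
    var myChunk = chunks[i]; // capture this task's chunk in a local variable
    Tasks[i] = Task.Run(() => Program.Check(myChunk, Key, Current, Total, Node, Token), Token);
}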

How can I choose the second highest value from a list in c#?

I have a list List<int> myList = new List<int>() { 10, 20, 8, 20, 9, 5, 20, 10 };, and I want to choose the second highest value, which in this case is 10. I wrote this code and it works, but I wonder if there is something shorter and better.
List<int> myList = new List<int>() { 10, 20, 8, 20, 9, 5, 20, 10 };
myList = myList.Distinct().ToList();
var descendingOrder = myList.OrderByDescending(i => i);
var sec = descendingOrder.Skip(1).First();
You could just stop using intermediate variables and ToList()
var secondHighest =
    myList
        .Distinct()
        .OrderByDescending(i => i)
        .Skip(1)
        .First();
This will work the same as your version, but only requires one statement instead of three.
I find it a lot easier to read code like this:
each LINQ method call on its own line, and no intermediate variables, especially ones that change (myList is reassigned, which makes it harder to comprehend).
Dave's suggestion to perform all the operations in one pipeline is very good indeed, as it avoids:
unnecessary intermediate variables
eagerly creating new collection objects at intermediate steps
clutter
and it is more readable, i.e. it's easier to see what's going on.
On the other hand, in terms of efficiency, it might be better to perform two passes over the source list instead of "sorting" the entire list only to take the second item.
var maximum = myList.Max();
var secondMaximum = myList.Where(x => x < maximum).Max();
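One caveat with the two-pass version: if every element equals the maximum, the Where() leaves nothing and the second Max() throws an InvalidOperationException. If that case can occur, one possible guard is:

var secondMaximum = myList.Where(x => x < maximum)
                          .DefaultIfEmpty(int.MinValue) // fallback when no smaller value exists
                          .Max();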
I think I'd avoid LINQ for this one and just go for a standard loop over every element: if the current value is higher than the max, push the current max down to second place and make the current value the new max; otherwise, if it sits between second place and the max, it becomes the new second place.
int sec = int.MinValue;
for (int i = 0, m = int.MinValue; i < list.Count; i++)
{
    if (list[i] > m)
    {
        sec = m;       // old max drops to second place
        m = list[i];   // new max
    }
    else if (list[i] > sec && list[i] < m)
    {
        sec = list[i]; // between second place and the max
    }
}
Your given logic distincts the values, so 20 is not treated as the second highest in your list even though there are three values of 20. That is achieved here by the strict comparisons. If I'd used >= in the first branch then each 20 would roll the variables and it would behave as if the values were not distincted.
If you're interested in performance, test it over a list with a few million entries and pick the one that meets your appetite for readability vs speed
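For example, a rough timing sketch comparing the LINQ pipeline with the plain loop (Stopwatch only, not a proper benchmark; the list size and seed are arbitrary):

// using System.Diagnostics; using System.Linq;
var rnd = new Random(42);
var list = Enumerable.Range(0, 5_000_000).Select(_ => rnd.Next()).ToList();

var sw = Stopwatch.StartNew();
var viaLinq = list.Distinct().OrderByDescending(i => i).Skip(1).First();
Console.WriteLine($"LINQ: {viaLinq} in {sw.ElapsedMilliseconds} ms");

sw.Restart();
int sec = int.MinValue;
for (int i = 0, m = int.MinValue; i < list.Count; i++)
{
    if (list[i] > m)
    {
        sec = m;
        m = list[i];
    }
    else if (list[i] > sec && list[i] < m)
    {
        sec = list[i];
    }
}
Console.WriteLine($"Loop: {sec} in {sw.ElapsedMilliseconds} ms");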
It's not LINQ-y, but it's O(N) and easy to read:
public static int TheSecondMax()
{
    List<int> myList = new List<int>() { 10, 20, 8, 20, 9, 5, 20, 10 };
    int max = int.MinValue;
    int secondMax = int.MinValue;
    foreach (var item in myList)
    {
        if (item > max)
        {
            secondMax = max; // the old max becomes the second highest
            max = item;
        }
        else if (item > secondMax && item < max)
        {
            secondMax = item;
        }
    }
    return secondMax;
}

How to observe Immutable List NotifyCollectionChanged?

As we know, we can observe collection changed using ObservableCollection.
That's fine.
But how to handle ImmutableList changed?
For example: I have an IObservable<ImmutableArray<int>>, and the sequence of this stream might be:
First: 1, 2, 3, 4, 5
Second: 1, 2, 3, 4, 5, 6 <----(maybe some performance issue when binding to view.)
Third: 3, 4
Is there any elegant way (or some library) to convert IObservable<ImmutableArray<int>> to ObservableCollection<int>?
And then we can observe ObservableCollection notification event:
First: add event 1, 2, 3, 4, 5
Second: add event 6 <---- (That's cool!)
Third: remove event 1, 2, 5, 6
Many thanks.
This might be a bit of a naive approach, but is this the kind of thing you had in mind?
source
    .Subscribe(ia =>
    {
        var ia2 = ia.ToArray();
        var adds = ia2.Except(oc).ToArray();
        var removes = oc.Except(ia2).ToArray();
        foreach (var a in adds)
        {
            oc.Add(a);
        }
        foreach (var r in removes)
        {
            oc.Remove(r);
        }
    });
After some research, I have an answer to my own question.
The best solution should be the Levenshtein distance.
The computational process is roughly as follows:
Determine the insert, delete and substitution costs (insert = 1, delete = 1, substitution = 2).
Calculate the Levenshtein distance and get the matrix.
Backtrace the matrix for the shortest path and alignment (it's very like A* pathfinding: set backtrace pointers while generating the matrix, then get the shortest path by following them).
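For illustration, here is a minimal sketch of that process; the method name EditScript and the string events are illustrative only:

using System;
using System.Collections.Generic;

static List<string> EditScript<T>(IList<T> oldList, IList<T> newList)
{
    var eq = EqualityComparer<T>.Default;
    int n = oldList.Count, m = newList.Count;

    // Levenshtein matrix with insert = 1, delete = 1, substitution = 2.
    var d = new int[n + 1, m + 1];
    for (int i = 0; i <= n; i++) d[i, 0] = i;
    for (int j = 0; j <= m; j++) d[0, j] = j;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
        {
            int sub = eq.Equals(oldList[i - 1], newList[j - 1]) ? 0 : 2;
            d[i, j] = Math.Min(Math.Min(
                d[i - 1, j] + 1,         // delete oldList[i - 1]
                d[i, j - 1] + 1),        // insert newList[j - 1]
                d[i - 1, j - 1] + sub);  // match or substitute
        }

    // Backtrace from the bottom-right corner to recover add/remove events.
    var ops = new List<string>();
    int x = n, y = m;
    while (x > 0 || y > 0)
    {
        if (x > 0 && y > 0 && d[x, y] == d[x - 1, y - 1] && eq.Equals(oldList[x - 1], newList[y - 1]))
        {
            x--; y--;                          // unchanged element, no event
        }
        else if (y > 0 && d[x, y] == d[x, y - 1] + 1)
        {
            y--; ops.Add($"add {newList[y]}");
        }
        else
        {
            x--; ops.Add($"remove {oldList[x]}");
        }
    }
    ops.Reverse();
    return ops;
}

For the streams in the question, EditScript(new[] { 1, 2, 3, 4, 5, 6 }, new[] { 3, 4 }) produces remove 1, remove 2, remove 5, remove 6.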
Therefore this question could be closed.
I actually wrote a nuget package that does this automatically for you
https://github.com/Weingartner/ReactiveCompositeCollections
Part of the code uses diffs between immutable lists to generate ObservableCollection change events.
The code that does the diffing uses DiffLib
public static IObservable<List<DiffElement<T>>>
    ChangesObservable<T>
    ( this ICompositeList<T> source
    , IEqualityComparer<T> comparer = null
    )
{
    return source
        .Items // IObservable<ImmutableList<T>>
        .StartWith(ImmutableList<T>.Empty)
        .Buffer(2, 1).Where(b => b.Count == 2)
        .Select(b =>
        {
            var sections = Diff.CalculateSections(b[0], b[1], comparer);
            var alignment = Diff.AlignElements
                (b[0], b[1], sections, new BasicReplaceInsertDeleteDiffElementAligner<T>());
            return alignment.ToList();
        });
}
which in another method can be converted into an ObservableCollection
internal ReadOnlyObservableCollection
    ( ICompositeList<T> list
    , System.Collections.ObjectModel.ObservableCollection<T> collection
    , IEqualityComparer<T> eq
    ) : base(collection)
{
    _List = list;
    _Collection = collection;
    _Disposable = list.ChangesObservable(eq)
        .Subscribe(change =>
        {
            int i = 0;
            foreach (var diff in change)
            {
                switch (diff.Operation)
                {
                    case DiffOperation.Match:
                        break;
                    case DiffOperation.Insert:
                        _Collection.Insert(i, diff.ElementFromCollection2.Value);
                        break;
                    case DiffOperation.Delete:
                        _Collection.RemoveAt(i);
                        i--;
                        break;
                    case DiffOperation.Replace:
                        _Collection[i] = diff.ElementFromCollection2.Value;
                        break;
                    case DiffOperation.Modify:
                        _Collection[i] = diff.ElementFromCollection2.Value;
                        break;
                    default:
                        throw new ArgumentOutOfRangeException();
                }
                i++;
            }
        });
}

Get all possible distinct triples using LINQ

I have a List that contains these values: {1, 2, 3, 4, 5, 6, 7}, and I want to be able to retrieve the unique combinations of three. The result should be like this:
{1,2,3}
{1,2,4}
{1,2,5}
{1,2,6}
{1,2,7}
{2,3,4}
{2,3,5}
{2,3,6}
{2,3,7}
{3,4,5}
{3,4,6}
{3,4,7}
{3,4,1}
{4,5,6}
{4,5,7}
{4,5,1}
{4,5,2}
{5,6,7}
{5,6,1}
{5,6,2}
{5,6,3}
I already have 2 for loops that are able to do this:
for (int first = 0; first < test.Count - 2; first++)
{
    int second = first + 1;
    for (int offset = 1; offset < test.Count; offset++)
    {
        int third = (second + offset) % test.Count;
        if (Math.Abs(first - third) < 2)
            continue;
        List<int> temp = new List<int>();
        temp.Add(test[first]);
        temp.Add(test[second]);
        temp.Add(test[third]);
        result.Add(temp);
    }
}
But since I'm learning LINQ, I wonder if there is a smarter way to do this?
UPDATE: I used this question as the subject of a series of articles starting here; I'll go through two slightly different algorithms in that series. Thanks for the great question!
The two solutions posted so far are correct but inefficient for the cases where the numbers get large. The solutions posted so far use the algorithm: first enumerate all the possibilities:
{1, 1, 1 }
{1, 1, 2 },
{1, 1, 3 },
...
{7, 7, 7}
And while doing so, filter out any where the second is not larger than the first, and the third is not larger than the second. This performs 7 x 7 x 7 filtering operations, which is not that many, but if you were trying to get, say, permutations of ten elements from thirty, that's 30 x 30 x 30 x 30 x 30 x 30 x 30 x 30 x 30 x 30, which is rather a lot. You can do better than that.
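(For a rough sense of scale: there are only C(7,3) = 35 valid triples versus the 7 x 7 x 7 = 343 candidates that get filtered, and only C(30,10) = 30,045,015 ten-element choices from thirty versus 30^10 ≈ 5.9 x 10^14.)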
I would solve this problem as follows. First, produce a data structure which is an efficient immutable set. Let me be very clear what an immutable set is, because you are likely not familiar with them. You normally think of a set as something you add items and remove items from. An immutable set has an Add operation but it does not change the set; it gives you back a new set which has the added item. The same for removal.
Here is an implementation of an immutable set where the elements are integers from 0 to 31:
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System;
// A super-cheap immutable set of integers from 0 to 31 ;
// just a convenient wrapper around bit operations on an int.
internal struct BitSet : IEnumerable<int>
{
public static BitSet Empty { get { return default(BitSet); } }
private readonly int bits;
private BitSet(int bits) { this.bits = bits; }
public bool Contains(int item)
{
Debug.Assert(0 <= item && item <= 31);
return (bits & (1 << item)) != 0;
}
public BitSet Add(int item)
{
Debug.Assert(0 <= item && item <= 31);
return new BitSet(this.bits | (1 << item));
}
public BitSet Remove(int item)
{
Debug.Assert(0 <= item && item <= 31);
return new BitSet(this.bits & ~(1 << item));
}
IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); }
public IEnumerator<int> GetEnumerator()
{
for(int item = 0; item < 32; ++item)
if (this.Contains(item))
yield return item;
}
public override string ToString()
{
return string.Join(",", this);
}
}
Read this code carefully to understand how it works. Again, always remember that adding an element to this set does not change the set. It produces a new set that has the added item.
OK, now that we've got that, let's consider a more efficient algorithm for producing your permutations.
We will solve the problem recursively. A recursive solution always has the same structure:
Can we solve a trivial problem? If so, solve it.
If not, break the problem down into a number of smaller problems and solve each one.
Let's start with the trivial problems.
Suppose you have a set and you wish to choose zero items from it. The answer is clear: there is only one possible permutation with zero elements, and that is the empty set.
Suppose you have a set with n elements in it and you want to choose more than n elements. Clearly there is no solution, not even the empty set.
We have now taken care of the cases where the set is empty or the number of elements chosen is more than the number of elements total, so we must be choosing at least one thing from a set that has at least one thing.
Of the possible permutations, some of them have the first element in them and some of them do not. Find all the ones that have the first element in them and yield them. We do this by recursing to choose one fewer elements on the set that is missing the first element.
The ones that do not have the first element in them we find by enumerating the permutations of the set without the first element.
static class Extensions
{
    public static IEnumerable<BitSet> Choose(this BitSet b, int choose)
    {
        if (choose < 0) throw new InvalidOperationException();
        if (choose == 0)
        {
            // Choosing zero elements from any set gives the empty set.
            yield return BitSet.Empty;
        }
        else if (b.Count() >= choose)
        {
            // We are choosing at least one element from a set that has
            // a first element. Get the first element, and the set
            // lacking the first element.
            int first = b.First();
            BitSet rest = b.Remove(first);
            // These are the permutations that contain the first element:
            foreach (BitSet r in rest.Choose(choose - 1))
                yield return r.Add(first);
            // These are the permutations that do not contain the first element:
            foreach (BitSet r in rest.Choose(choose))
                yield return r;
        }
    }
}
Now we can ask the question that you need the answer to:
class Program
{
    static void Main()
    {
        BitSet b = BitSet.Empty.Add(1).Add(2).Add(3).Add(4).Add(5).Add(6).Add(7);
        foreach (BitSet result in b.Choose(3))
            Console.WriteLine(result);
    }
}
And we're done. We have generated only as many sequences as we actually need. (Though we have done a lot of set operations to get there, but set operations are cheap.) The point here is that understanding how this algorithm works is extremely instructive. Recursive programming on immutable structures is a powerful tool that many professional programmers do not have in their toolbox.
You can do it like this:
var data = Enumerable.Range(1, 7);
var r = from a in data
        from b in data
        from c in data
        where a < b && b < c
        select new { a, b, c };
foreach (var x in r)
{
    Console.WriteLine("{0} {1} {2}", x.a, x.b, x.c);
}
Demo.
Edit: Thanks Eric Lippert for simplifying the answer!
var ints = new int[] { 1, 2, 3, 4, 5, 6, 7 };
var permutations = ints.SelectMany(a => ints.Where(b => (b > a)).
                        SelectMany(b => ints.Where(c => (c > b)).
                        Select(c => new { a = a, b = b, c = c })));

Thoughts on foreach with Enumerable.Range vs traditional for loop

In C# 3.0, I'm liking this style:
// Write the numbers 1 thru 7
foreach (int index in Enumerable.Range( 1, 7 ))
{
    Console.WriteLine(index);
}
over the traditional for loop:
// Write the numbers 1 thru 7
for (int index = 1; index <= 7; index++)
{
    Console.WriteLine( index );
}
Assuming 'n' is small so performance is not an issue, does anyone object to the new style over the traditional style?
I find the latter's "minimum-to-maximum" format a lot clearer than Range's "minimum-count" style for this purpose. Also, I don't think it's really a good practice to make a change like this from the norm that is not faster, not shorter, not more familiar, and not obviously clearer.
That said, I'm not against the idea in general. If you came up to me with syntax that looked something like foreach (int x from 1 to 8) then I'd probably agree that that would be an improvement over a for loop. However, Enumerable.Range is pretty clunky.
This is just for fun. (I'd just use the standard "for (int i = 1; i <= 10; i++)" loop format myself.)
foreach (int i in 1.To(10))
{
    Console.WriteLine(i); // 1,2,3,4,5,6,7,8,9,10
}
// ...
public static IEnumerable<int> To(this int from, int to)
{
    if (from < to)
    {
        while (from <= to)
        {
            yield return from++;
        }
    }
    else
    {
        while (from >= to)
        {
            yield return from--;
        }
    }
}
You could also add a Step extension method too:
foreach (int i in 5.To(-9).Step(2))
{
    Console.WriteLine(i); // 5,3,1,-1,-3,-5,-7,-9
}
// ...
public static IEnumerable<T> Step<T>(this IEnumerable<T> source, int step)
{
    if (step == 0)
    {
        throw new ArgumentOutOfRangeException("step", "Param cannot be zero.");
    }
    return source.Where((x, i) => (i % step) == 0);
}
In C# 6.0 with the use of
using static System.Linq.Enumerable;
you can simplify it to
foreach (var index in Range(1, 7))
{
    Console.WriteLine(index);
}
You can actually do this in C# (by providing To and Do as extension methods on int and IEnumerable<T> respectively):
1.To(7).Do(Console.WriteLine);
SmallTalk forever!
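The answer doesn't show the two extension methods, so here is a minimal sketch of what they might look like (one possible implementation, inferred from the usage above):

public static class SmalltalkishExtensions
{
    // 1.To(7) yields 1, 2, ..., 7.
    public static IEnumerable<int> To(this int from, int to)
    {
        for (int i = from; i <= to; i++)
            yield return i;
    }

    // source.Do(action) runs the action for every element, Smalltalk-style.
    public static void Do<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source)
            action(item);
    }
}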
I kind of like the idea. It's very much like Python. Here's my version in a few lines:
static class Extensions
{
    public static IEnumerable<int> To(this int from, int to, int step = 1) {
        if (step == 0)
            throw new ArgumentOutOfRangeException("step", "step cannot be zero");
        // stop if the next `step` reaches or oversteps `to`, in either +/- direction
        while (!(step > 0 ^ from < to) && from != to) {
            yield return from;
            from += step;
        }
    }
}
It works like Python's:
0.To(4) → [ 0, 1, 2, 3 ]
4.To(0) → [ ] (as in Python, counting down needs a negative step: 4.To(0, -1) → [ 4, 3, 2, 1 ])
4.To(4) → [ ]
7.To(-3, -3) → [ 7, 4, 1, -2 ]
I think the foreach + Enumerable.Range is less error prone (you have less control and fewer ways to do it wrong, like decreasing the index inside the body so the loop never ends, etc.)
The readability problem is about the Range function's semantics, which can change from one language to another (e.g. if given just one parameter, will it begin from 0 or 1? Is the end included or excluded? Is the second parameter a count instead of an end value?).
About performance, I think the compiler should be smart enough to optimize both loops so they execute at a similar speed, even with large ranges (I suppose that Range does not create a collection, but is of course an iterator).
I think Range is useful for working with some range inline:
var squares = Enumerable.Range(1, 7).Select(i => i * i);
You can also ForEach over it. That requires converting to a list, but it keeps things compact when that's what you want.
Enumerable.Range(1, 7).ToList().ForEach(i => Console.WriteLine(i));
But other than for something like this, I'd use traditional for loop.
It seems like quite a long-winded approach to a problem that's already solved. There's a whole state machine behind Enumerable.Range that isn't really needed.
The traditional format is fundamental to development and familiar to all. I don't really see any advantage to your new style.
I'd like to have the syntax of some other languages like Python, Haskell, etc.
// Write the numbers 1 thru 7
foreach (int index in [1..7])
{
    Console.WriteLine(index);
}
Fortunately, we got F# now :)
As for C#, I'll have to stick with the Enumerable.Range method.
@Luke:
I reimplemented your To() extension method and used the Enumerable.Range() method to do it.
This way it comes out a little shorter and uses as much infrastructure given to us by .NET as possible:
public static IEnumerable<int> To(this int from, int to)
{
    return from < to
        ? Enumerable.Range(from, to - from + 1)
        : Enumerable.Range(to, from - to + 1).Reverse();
}
How to use a new syntax today
Because of this question I tried out some things to come up with a nice syntax without waiting for first-class language support. Here's what I have:
using static Enumerizer;
// prints: 0 1 2 3 4 5 6 7 8 9
foreach (int i in 0 <= i < 10)
    Console.Write(i + " ");
Note the difference between <= and <.
I also created a proof of concept repository on GitHub with even more functionality (reversed iteration, custom step size).
A minimal and very limited implementation of the above loop would look something like this:
public readonly struct Enumerizer
{
    public static readonly Enumerizer i = default;

    public Enumerizer(int start) =>
        Start = start;

    public readonly int Start;

    public static Enumerizer operator <(int start, Enumerizer _) =>
        new Enumerizer(start);

    public static Enumerizer operator >(int _, Enumerizer __) =>
        throw new NotImplementedException();

    public static IEnumerable<int> operator <=(Enumerizer start, int end)
    {
        for (int i = start.Start; i < end; i++)
            yield return i;
    }

    public static IEnumerable<int> operator >=(Enumerizer _, int __) =>
        throw new NotImplementedException();
}
There is no significant performance difference between traditional iteration and range iteration, as Nick Chapsas pointed out in his excellent YouTube video. His benchmark showed only a difference of nanoseconds for small numbers of iterations, and as the loop gets quite big the difference is almost gone.
Here is an elegant way of iterating in a range loop from his content:
private static void Test()
{
    foreach (var i in 1..5)
    {
    }
}
Using this extension:
public static class Extension
{
    public static CustomIntEnumerator GetEnumerator(this Range range)
    {
        return new CustomIntEnumerator(range);
    }

    public static CustomIntEnumerator GetEnumerator(this int number)
    {
        return new CustomIntEnumerator(new Range(0, number));
    }
}

public ref struct CustomIntEnumerator
{
    private int _current;
    private readonly int _end;

    public CustomIntEnumerator(Range range)
    {
        if (range.End.IsFromEnd)
        {
            throw new NotSupportedException();
        }
        _current = range.Start.Value - 1;
        _end = range.End.Value;
    }

    public int Current => _current;

    public bool MoveNext()
    {
        _current++;
        return _current <= _end;
    }
}
Benchmark result: (benchmark chart not reproduced here)
I loved this way of implementing it. But the biggest issue with this approach is that it can't be used in an async method, since the enumerator is a ref struct.
I'm sure everybody has their personal preferences (many would prefer the latter just because it is familiar across almost all programming languages), but I am like you and starting to like the foreach more and more, especially now that you can define a range.
In my opinion the Enumerable.Range() way is more declarative. New and unfamiliar to people? Certainly. But I think this declarative approach yields the same benefits as most other LINQ-related language features.
I imagine there could be scenarios where Enumerable.Range(index, count) is clearer when dealing with expressions for the parameters, especially if some of the values in that expression are altered within the loop. In the case of for, the condition is re-evaluated on each iteration based on the current state, whereas Enumerable.Range() is evaluated up-front.
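A small illustration of that difference (limit is just a made-up variable for this example):

int limit = 3;

// The for condition re-reads `limit` on every iteration, so growing it
// inside the loop extends the loop: prints 1 2 3 4 5 6.
for (int i = 1; i <= limit; i++)
{
    if (i == 2) limit = 6;
    Console.Write(i + " ");
}

limit = 3;

// Enumerable.Range captured its count up front, so the same mutation
// changes nothing: prints 1 2 3.
foreach (int i in Enumerable.Range(1, limit))
{
    if (i == 2) limit = 6;
    Console.Write(i + " ");
}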
Other than that, I'd agree that sticking with for would normally be better (more familiar/readable to more people... readable is a very important value in code that needs to be maintained).
I agree that in many (or even most cases) foreach is much more readable than a standard for-loop when simply iterating over a collection. However, your choice of using Enumerable.Range(index, count) isn't a strong example of the value of foreach over for.
For a simple range starting from 1, Enumerable.Range(index, count) looks quite readable. However, if the range starts with a different index, it becomes less readable because you have to properly perform index + count - 1 to determine what the last element will be. For example…
// Write the numbers 2 thru 8
foreach (var index in Enumerable.Range( 2, 7 ))
{
    Console.WriteLine(index);
}
In this case, I much prefer the second example.
// Write the numbers 2 thru 8
for (int index = 2; index <= 8; index++)
{
    Console.WriteLine(index);
}
Strictly speaking, you misuse enumeration.
An enumerator provides the means to access all the objects in a container one-by-one, but it does not guarantee the order.
It is OK to use enumeration to find the biggest number in an array. If you are using it to find, say, the first non-zero element, you are relying on an implementation detail you should not know about. In your example, the order seems to be important to you.
Edit: I am wrong. As Luke pointed out (see comments), it is safe to rely on the order when enumerating an array in C#. This is different from, for example, using "for in" to enumerate an array in JavaScript.
I do like the foreach + Enumerable.Range approach and use it sometimes.
// does anyone object to the new style over the traditional style?
foreach (var index in Enumerable.Range(1, 7))
I object to the var abuse in your proposal. I appreciate var, but, damn, just write int in this case! ;-)
Just throwing my hat into the ring.
I define this...
namespace CustomRanges {
    public record IntRange(int From, int Thru, int step = 1) : IEnumerable<int> {
        public IEnumerator<int> GetEnumerator() {
            for (var i = From; i <= Thru; i += step)
                yield return i;
        }
        IEnumerator IEnumerable.GetEnumerator()
            => GetEnumerator();
    };

    public static class Definitions {
        public static IntRange FromTo(int from, int to, int step = 1)
            => new IntRange(from, to - 1, step);
        public static IntRange FromThru(int from, int thru, int step = 1)
            => new IntRange(from, thru, step);
        public static IntRange CountFrom(int from, int count)
            => new IntRange(from, from + count - 1);
        public static IntRange Count(int count)
            => new IntRange(0, count - 1); // end at count - 1 so Count(5) yields 0..4
        // Add more to suit your needs. For instance, you could add in reversing ranges, etc.
    }
}
Then anywhere I want to use it, I add this at the top of the file...
using static CustomRanges.Definitions;
And use it like this...
foreach (var index in FromTo(1, 4))
    Debug.WriteLine(index);
// Prints 1, 2, 3

foreach (var index in FromThru(1, 4))
    Debug.WriteLine(index);
// Prints 1, 2, 3, 4

foreach (var index in FromThru(2, 10, 2))
    Debug.WriteLine(index);
// Prints 2, 4, 6, 8, 10

foreach (var index in CountFrom(7, 4))
    Debug.WriteLine(index);
// Prints 7, 8, 9, 10

foreach (var index in Count(5))
    Debug.WriteLine(index);
// Prints 0, 1, 2, 3, 4

foreach (var _ in Count(4))
    Debug.WriteLine("A");
// Prints A, A, A, A
The nice thing about this approach is that, thanks to the names, you know exactly whether the end is included or not.
