C# MongoDB driver only returning 100 results

I am writing a mailing label, and need to print a label for each document.
I have 829 documents on the Collection, but when I retrieve them, I only get 100 documents.
I have this LINQ code:
IMongoCollection<Pessoa> Pessoa;
Pessoa = database.GetCollection<Pessoa>(collectionName);
return Pessoa.AsQueryable().ToList();
How to retrieve ALL the documents?

I have 829 documents on the Collection, but when I retrieve them, I only get 100 documents.
I could reproduce the issue on my side: using the AsQueryable extension method on IMongoCollection (collection.AsQueryable()) to find documents in a collection always returned 100 documents, even after I changed the Items per page setting to Unlimited in the Azure portal.
To query all the documents in a collection, as you mentioned in the comment, you could call the Find method with an empty filter.
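A minimal sketch of that call, assuming the official MongoDB .NET driver (the Pessoa type is from the question; the repository class and method names are illustrative):

```csharp
// Sketch only: requires the MongoDB.Driver NuGet package and a reachable server.
using System.Collections.Generic;
using MongoDB.Driver;

public class Pessoa { /* fields as in the question */ }

public static class PessoaRepository
{
    public static List<Pessoa> GetAll(IMongoDatabase database, string collectionName)
    {
        IMongoCollection<Pessoa> collection = database.GetCollection<Pessoa>(collectionName);

        // An empty filter matches every document; ToList() drains the cursor
        // across all server batches, so all 829 documents come back.
        return collection.Find(Builders<Pessoa>.Filter.Empty).ToList();
    }
}
```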

You're probably being limited by the default cursor BatchSize.
You can modify this behaviour by passing an AggregateOptions object to the AsQueryable extension method and setting its BatchSize property to a large enough value.
public static IMongoQueryable<TDocument> AsQueryable<TDocument>(this IMongoCollection<TDocument> collection, AggregateOptions aggregateOptions = null)
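A hedged sketch of how that might look, assuming the MongoDB .NET driver (the Pessoa type is from the question; the wrapper class is illustrative):

```csharp
// Sketch only: requires the MongoDB.Driver NuGet package.
using System.Collections.Generic;
using MongoDB.Driver;
using MongoDB.Driver.Linq;

public static class BatchSizeExample
{
    public static List<Pessoa> GetAll(IMongoCollection<Pessoa> collection)
    {
        // BatchSize controls how many documents come back per server round trip;
        // the cursor still walks every batch until the server side is exhausted.
        var options = new AggregateOptions { BatchSize = 1000 };
        return collection.AsQueryable(options).ToList();
    }
}
```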

I found the question useful, so I wrote a convenient IEnumerator:
private sealed class MongoCollectionEnumerator<T> : IEnumerator<T> {
    private IMongoCollection<T> _collection;
    private IAsyncCursor<T> _cursor;                // outer enumerator
    private IEnumerator<T> _currentBatchEnumerator; // inner enumerator

    public MongoCollectionEnumerator(IMongoCollection<T> collection) {
        _collection = collection;
        InternalInit();
    }

    #region interface implementation

    T IEnumerator<T>.Current {
        get {
            return _currentBatchEnumerator.Current;
        }
    }

    object IEnumerator.Current {
        get {
            return ThisAsTypedIEnumerator.Current;
        }
    }

    bool IEnumerator.MoveNext() {
        if (_currentBatchEnumerator != null) {
            if (_currentBatchEnumerator.MoveNext()) {
                return true;
            }
        }
        // inner not initialized or already at end
        if (_cursor.MoveNext()) {
            // advance the outer and defer back to the inner by recursing
            _currentBatchEnumerator = _cursor.Current.GetEnumerator();
            return ThisAsIEnumerator.MoveNext();
        }
        else { // outer cannot advance, this is the end
            return false;
        }
    }

    void IEnumerator.Reset() {
        InternalCleanUp();
        InternalInit();
    }

    #endregion

    #region methods private

    // helper properties to retrieve an explicit interface-casted this
    private IEnumerator ThisAsIEnumerator => this;
    private IEnumerator<T> ThisAsTypedIEnumerator => this;

    private void InternalInit() {
        var filterBuilder = new FilterDefinitionBuilder<T>();
        _cursor = _collection.Find(filterBuilder.Empty).ToCursor();
    }

    private void InternalCleanUp() {
        if (_currentBatchEnumerator != null) {
            _currentBatchEnumerator.Dispose(); // Dispose, not Reset: many IEnumerator<T> implementations throw on Reset
            _currentBatchEnumerator = null;
        }
        if (_cursor != null) {
            _cursor.Dispose();
            _cursor = null;
        }
    }

    #endregion

    #region IDisposable implementation

    private bool disposedValue = false; // To detect redundant calls

    private void InternalDispose(bool disposing) {
        if (!disposedValue) {
            if (disposing) {
                InternalCleanUp();
                _collection = null;
            }
            disposedValue = true;
        }
    }

    void IDisposable.Dispose() {
        InternalDispose(true);
    }

    #endregion
}


How to handle an IEnumerable of IDisposable objects not knowing if the results are yield or not? [closed]

I'm looking for best practices / standard on how to deal with this situation.
We have our code (MyClass) that consumes another class (ItemGenerator).
ItemGenerator is a blackbox for us so we don't know the implementation (we do but we don't want to rely on that because it could change from underneath).
ItemGenerator has a method, GetItems(), that returns an IEnumerable of Item.
Item class implements IDisposable so we should dispose of the object when we are done.
When we (MyClass) iterate through the list of items, if an exception (any exception) occurs, we want to stop processing and release control (bubble up the exception).
My question is this:
Should we keep iterating through the items in order to dispose of all of them? It might seem silly but what happens with the rest of the items if they are not disposed of?
At the same time, based on the code below, we should definitely not iterate through the rest of the items, because they come from yield return. Why generate them just so we can dispose of them? It could significantly affect performance.
The problem is that we do not know whether GetItems() returns the items on demand (yield) or not. And I don't think we should care, right?
So how should we handle the situation when an exception occurs in the middle of the list (for example)?
Below is an example of the code that illustrates the gist of it.
This is our code:
public class MyClass
{
    public void VerifyAllItems()
    {
        ItemGenerator generator = new ItemGenerator();
        foreach (Item item in generator.GetItems())
        {
            try
            {
                // Do some work with "item" here. Though an exception could occur.
                // If an exception occurs, we don't care about processing the rest
                // of the items and just want to bubble up the exception.
            }
            finally
            {
                // Always dispose of the item.
                item?.Dispose();
            }
        }
    }
}
And this is the blackbox code
public class ItemGenerator
{
    private long _itemsToGenerate = 0;

    public ItemGenerator()
    {
        _itemsToGenerate = new Random().Next(10, 100);
    }

    public IEnumerable<Item> GetItems()
    {
        while (_itemsToGenerate > 0)
        {
            yield return HeavyWork();
            _itemsToGenerate--;
        }
    }

    private Item HeavyWork()
    {
        // Doing a lot of work here
        return new Item();
    }
}
public class Item : IDisposable
{
    private bool _isDisposed = false;

    public virtual void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool isDisposing)
    {
        if (!_isDisposed)
        {
            if (isDisposing)
            {
                // Dispose of any resources
            }
            _isDisposed = true;
        }
    }
}
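One fact worth pinning down before choosing an approach (a self-contained demo, not part of the question's code): foreach disposes the sequence's enumerator even when the loop exits early, and disposing an iterator-method enumerator runs its pending finally blocks. Items that were never yielded are never created at all, so there is nothing left to dispose.

```csharp
using System.Collections.Generic;

public static class IteratorDisposalDemo
{
    public static int Created;
    public static bool FinallyRan;

    public static IEnumerable<int> Items()
    {
        try
        {
            for (int i = 0; i < 100; i++)
            {
                Created++; // counts how many items were actually produced
                yield return i;
            }
        }
        finally
        {
            FinallyRan = true; // runs when the foreach below exits early
        }
    }

    public static void Run()
    {
        foreach (int i in Items())
        {
            if (i == 2) break; // simulate bailing out on an exception
        }
    }
}
```

After Run(), only three items were ever created, and the iterator's finally block has run.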
The problem is worse than you state. Not only can you not be sure whether to enumerate the collection, you can't be sure whether to dispose of any of the items at all. Just because something implements IDisposable doesn't mean you should be disposing it, e.g. if you have a factory that always returns the same instance of something.
This kind of a problem is exactly why the code that allocates something is generally held responsible for deallocating it. In this case, the ItemGenerator creates the items, so it should dispose them.
class ItemGenerator : IDisposable
{
    protected readonly List<Item> _instances = new List<Item>();

    public IEnumerable<Item> GetItems()
    {
        for ( some; condition; here; )
        {
            var item = new Item();
            _instances.Add(item);
            yield return item;
        }
    }

    public void Dispose()
    {
        foreach (var item in _instances) item.Dispose();
    }
}
Now all you have to do is put your ItemGenerator in a using block and you're good to go.
public void VerifyAllItems()
{
    using (ItemGenerator generator = new ItemGenerator())
    {
        foreach (Item item in generator.GetItems())
        {
            try
            {
                // Do some work with "item" here. Though an exception could occur.
                // If an exception occurs, we don't care about processing the rest
                // of the items and just want to bubble up the exception.
            }
            finally
            {
                // Don't need to dispose anything here
            }
        }
    } // Disposal happens here because of the using statement
}
With this pattern, any items that were allocated by the ItemGenerator get disposed when you exit the using block.
Now the caller doesn't have to care about the implementation of the item generator at all, or worry about disposing anything other than the generator itself. And of course you should dispose the generator if you're the one who allocated it.
Slightly bizarre, but...
internal class DisposingEnumerator : IEnumerator<Item>
{
    private readonly List<IDisposable> deallocationQueue = new List<IDisposable>();
    private readonly IEnumerable<Item> source;
    private IEnumerator<Item> sourceEnumerator;

    public DisposingEnumerator(IEnumerable<Item> source)
    {
        this.source = source;
    }

    public bool MoveNext()
    {
        if (sourceEnumerator == null)
        {
            sourceEnumerator = source.GetEnumerator();
        }
        bool hasNext = sourceEnumerator.MoveNext();
        if (hasNext)
        {
            deallocationQueue.Add(Current);
        }
        return hasNext;
    }

    public Item Current => sourceEnumerator.Current;

    object IEnumerator.Current => Current;

    public void Reset()
    {
        throw new NotSupportedException();
    }

    // Will be called within "foreach" statement.
    // You can implement IDisposable in ItemCollection as well.
    public void Dispose()
    {
        foreach (var item in deallocationQueue)
        {
            item.Dispose();
        }
    }
}
public class ItemCollection : IEnumerable<Item>
{
    private IEnumerator<Item> enumerator;

    public ItemCollection(IEnumerable<Item> source)
    {
        this.enumerator = new DisposingEnumerator(source);
    }

    public IEnumerator<Item> GetEnumerator()
    {
        return enumerator;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
Usage:
var items = generator.GetItems();
// Equivalent of using statement
foreach (var item in new ItemCollection(items))
{
}

Looking for a way to do less locking while caching

I am using the code below to cache items. It's pretty basic.
The issue I have is that every time it caches an item, a section of the code locks. So with roughly a million items arriving every hour or so, this is a problem.
I've tried creating a dictionary of static lock objects per cacheKey, so that locking is granular, but that in itself becomes an issue with managing their expiration, etc...
Is there a better way to implement minimal locking?
private static readonly object cacheLock = new object();

public static T GetFromCache<T>(string cacheKey, Func<T> GetData) where T : class {
    // Returns null if the string does not exist, which prevents a race condition
    // where the cache invalidates between the contains check and the retrieval.
    T cachedData = MemoryCache.Default.Get(cacheKey) as T;
    if (cachedData != null) {
        return cachedData;
    }

    lock (cacheLock) {
        // Check to see if anyone wrote to the cache while we were
        // waiting our turn to write the new value.
        cachedData = MemoryCache.Default.Get(cacheKey) as T;
        if (cachedData != null) {
            return cachedData;
        }

        // The value still did not exist, so we now write it in to the cache.
        cachedData = GetData();
        MemoryCache.Default.Set(cacheKey, cachedData, new CacheItemPolicy(...));
        return cachedData;
    }
}
You may want to consider using ReaderWriterLockSlim, which lets you take the write lock only when it is actually needed.
Using cacheLock.EnterReadLock() and cacheLock.EnterWriteLock() should greatly improve performance.
The documentation for ReaderWriterLockSlim even has an example of a cache, exactly what you need; I copy it here:
public class SynchronizedCache
{
    private ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
    private Dictionary<int, string> innerCache = new Dictionary<int, string>();

    public int Count
    { get { return innerCache.Count; } }

    public string Read(int key)
    {
        cacheLock.EnterReadLock();
        try
        {
            return innerCache[key];
        }
        finally
        {
            cacheLock.ExitReadLock();
        }
    }

    public void Add(int key, string value)
    {
        cacheLock.EnterWriteLock();
        try
        {
            innerCache.Add(key, value);
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }

    public bool AddWithTimeout(int key, string value, int timeout)
    {
        if (cacheLock.TryEnterWriteLock(timeout))
        {
            try
            {
                innerCache.Add(key, value);
            }
            finally
            {
                cacheLock.ExitWriteLock();
            }
            return true;
        }
        else
        {
            return false;
        }
    }

    public AddOrUpdateStatus AddOrUpdate(int key, string value)
    {
        cacheLock.EnterUpgradeableReadLock();
        try
        {
            string result = null;
            if (innerCache.TryGetValue(key, out result))
            {
                if (result == value)
                {
                    return AddOrUpdateStatus.Unchanged;
                }
                else
                {
                    cacheLock.EnterWriteLock();
                    try
                    {
                        innerCache[key] = value;
                    }
                    finally
                    {
                        cacheLock.ExitWriteLock();
                    }
                    return AddOrUpdateStatus.Updated;
                }
            }
            else
            {
                cacheLock.EnterWriteLock();
                try
                {
                    innerCache.Add(key, value);
                }
                finally
                {
                    cacheLock.ExitWriteLock();
                }
                return AddOrUpdateStatus.Added;
            }
        }
        finally
        {
            cacheLock.ExitUpgradeableReadLock();
        }
    }

    public void Delete(int key)
    {
        cacheLock.EnterWriteLock();
        try
        {
            innerCache.Remove(key);
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }

    public enum AddOrUpdateStatus
    {
        Added,
        Updated,
        Unchanged
    };

    ~SynchronizedCache()
    {
        if (cacheLock != null) cacheLock.Dispose();
    }
}
I don't know how MemoryCache.Default is implemented, or whether or not you have control over it.
But in general, prefer using ConcurrentDictionary over Dictionary with lock in a multi threaded environment.
GetFromCache would just become
ConcurrentDictionary<string, T> cache = new ConcurrentDictionary<string, T>();
...
cache.GetOrAdd("someKey", (key) =>
{
    var data = PullDataFromDatabase(key);
    return data;
});
There are two more things to take care about.
Expiry
Instead of saving T as the value of the dictionary, you can define a type
struct CacheItem<T>
{
    public T Item { get; set; }
    public DateTime Expiry { get; set; }
}
And store the cache as a CacheItem with a defined expiry.
cache.GetOrAdd("someKey", (key) =>
{
    var data = PullDataFromDatabase(key);
    return new CacheItem<T>() { Item = data, Expiry = DateTime.UtcNow.Add(TimeSpan.FromHours(1)) };
});
Now you can implement expiration on a background timer.
Timer expirationTimer = new Timer(ExpireCache, null, 60000, 60000);
...
void ExpireCache(object state)
{
    var needToExpire = cache.Where(c => DateTime.UtcNow >= c.Value.Expiry).Select(c => c.Key);
    foreach (var key in needToExpire)
    {
        cache.TryRemove(key, out CacheItem<T> _);
    }
}
Once a minute, you search for all cache entries that need to be expired, and remove them.
"Locking"
Using ConcurrentDictionary guarantees that simultaneous read/writes won't corrupt the dictionary or throw an exception.
But, you can still end up with a situation where two simultaneous reads cause you to fetch the data from the database twice.
One neat trick to solve this is to wrap the value of the dictionary with Lazy
ConcurrentDictionary<string, Lazy<CacheItem<T>>> cache = new ConcurrentDictionary<string, Lazy<CacheItem<T>>>();
...
var data = cache.GetOrAdd("someKey", key => new Lazy<CacheItem<T>>(() =>
{
    var data = PullDataFromDatabase(key);
    return new CacheItem<T>() { Item = data, Expiry = DateTime.UtcNow.Add(TimeSpan.FromHours(1)) };
})).Value;
Explanation
with GetOrAdd you might end up invoking the "get from database if not in cache" delegate multiple times in the case of simultaneous requests.
However, GetOrAdd will only ever store one of the values the delegates returned, and by returning a Lazy, you guarantee that only one Lazy value factory will actually run.
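That guarantee can be demonstrated with a self-contained sketch (the class, key, and FactoryRuns counter are illustrative, not from the original code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class LazyCacheDemo
{
    private static readonly ConcurrentDictionary<string, Lazy<int>> Cache =
        new ConcurrentDictionary<string, Lazy<int>>();

    public static int FactoryRuns;

    public static int Get(string key) =>
        Cache.GetOrAdd(key, _ => new Lazy<int>(() =>
        {
            // Stand-in for PullDataFromDatabase; counts how often it actually runs.
            Interlocked.Increment(ref FactoryRuns);
            Thread.Sleep(50);
            return 42;
        })).Value;

    public static void Run()
    {
        // Eight racing callers: GetOrAdd may build several Lazy wrappers, but
        // every caller reads .Value from the single stored one, so the factory
        // body executes exactly once.
        Parallel.For(0, 8, _ => Get("someKey"));
    }
}
```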

LINQ continue after Take

Say we have an IEnumerable<T> stuff;
Is there a concise way to Take n elements and then another m elements after the first, without re-evaluating?
example code:
stuff.Take(10);
stuff.Skip(10).Take(20); // re-evaluates stuff
What I was thinking was maybe this (not working code)
var it = stuff.GetEnumerator();
it.Take(10);
it.Take(20);
Edit to add to the difficulty and to clarify the complexity of what I would like to accomplish: I want to continue the query after the Take, i.e.
it.Take(10);
var cont = it.Select(Mutate);
cont.Take(20);
cont = cont.Where(Filter);
cont.Take(5);
You can use the Publish extension method in the System.Interactive NuGet package put out by Microsoft to accomplish this. This is a fantastic library that provides some 'missing' LINQ functions. From the documentation, the Publish method:
Creates a buffer with a view over the source sequence, causing each enumerator to obtain access to the remainder of the sequence from the current index in the buffer.
I.e. it allows you to partially enumerate a sequence and the next time you enumerate the sequence you will pick up where the previous enumeration left off.
var publishedSource = stuff.Publish();
var firstTenItems = publishedSource.Take(10).ToArray();
var nextTwentyTransformedItems = publishedSource.Take(20).Select(Mutate).ToArray();
// How you apply 'Where' depends on what you want to achieve.
// This returns the next 5 items that match the filter but if there are less
// than 5 items that match the filter you could end up enumerating the
// entire remainder of the sequence.
var nextFiveFilteredItems = publishedSource.Where(Filter).Take(5).ToArray();
// This enumerates _only_ the next 5 items and yields any that match the filter.
var nextOfFiveItemsThatPassFilter = publishedSource.Take(5).Where(Filter).ToArray();
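The point in those comments holds for plain LINQ too, and a counting source makes the difference in consumption visible (a self-contained sketch; the names are illustrative and no Ix package is needed):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class TakeWhereDemo
{
    public static int Pulled;

    // An endless counting source; Pulled records how many items were handed out.
    public static IEnumerable<int> Source()
    {
        for (int i = 0; ; i++)
        {
            Pulled++;
            yield return i;
        }
    }

    public static (int whereFirst, int takeFirst) Run()
    {
        Pulled = 0;
        // Where-then-Take pulls until five matches are found:
        // items 0..8 contain five even numbers, so nine items are consumed.
        var a = Source().Where(x => x % 2 == 0).Take(5).ToArray();
        int whereFirst = Pulled;

        Pulled = 0;
        // Take-then-Where pulls exactly five items, then filters them.
        var b = Source().Take(5).Where(x => x % 2 == 0).ToArray();
        int takeFirst = Pulled;

        return (whereFirst, takeFirst);
    }
}
```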
If you want to just create a wrapper for IEnumerable that will handle any LINQ appended on and take one pass through the source, use this class and extension:
public static class EnumerableOnceExt {
    public static EnumerableOnce<IEnumerable<T>, T> EnumerableOnce<T>(this IEnumerable<T> src) =>
        new EnumerableOnce<IEnumerable<T>, T>(src);
}
public class EnumerableOnce<T, V> : IEnumerable<V>, IDisposable where T : IEnumerable<V> {
    EnumeratorOnce<V> onceEnum;

    public EnumerableOnce(T src) {
        onceEnum = new EnumeratorOnce<V>(src.GetEnumerator());
    }

    public IEnumerator<V> GetEnumerator() {
        return onceEnum;
    }

    IEnumerator IEnumerable.GetEnumerator() {
        return onceEnum;
    }

    public void DoSkip(int n) {
        while (n > 0 && onceEnum.MoveNext())
            --n;
    }

    public void DoTake(int n) {
        while (n > 0 && onceEnum.MoveNext())
            --n;
    }

    #region IDisposable Support
    private bool disposedValue = false; // To detect redundant calls

    protected virtual void Dispose(bool disposing) {
        if (!disposedValue) {
            if (disposing) {
                onceEnum.ActuallyDispose();
            }
            disposedValue = true;
        }
    }

    // This code added to correctly implement the disposable pattern.
    public void Dispose() {
        Dispose(true);
    }
    #endregion
}
public class EnumeratorOnce<V> : IEnumerator<V> {
    IEnumerator<V> origEnum;

    public EnumeratorOnce(IEnumerator<V> src) {
        origEnum = src;
    }

    public V Current => origEnum.Current;
    object IEnumerator.Current => origEnum.Current;
    public bool MoveNext() => origEnum.MoveNext();

    public void Reset() {
        origEnum.Reset();
    }

    public void ActuallyDispose() {
        origEnum.Dispose();
    }

    #region IDisposable Support
    protected virtual void Dispose(bool disposing) {
        // don't allow disposing early
    }

    // This code added to correctly implement the disposable pattern.
    public void Dispose() {
        Dispose(true);
    }
    #endregion
}
Now your sample code will work if you call EnumerableOnce() to wrap the source, as long as you execute the enumerations:
var it1 = it.EnumerableOnce();
it1.Take(10).ToList();
var @continue = it1.Select(Mutate);  // "#continue" is not a legal C# identifier; "@continue" is
@continue.Take(20).ToList();
@continue = @continue.Where(Filter);
@continue.Take(5).ToList();
The DoSkip and DoTake methods on EnumerableOnce consume items from the single shared enumerator directly, without materializing intermediate lists:
public void DoSkip(int n) {
    while (n > 0 && onceEnum.MoveNext())
        --n;
}

public void DoTake(int n) {
    while (n > 0 && onceEnum.MoveNext())
        --n;
}
And call them:
var it1 = it.EnumerableOnce();
it1.DoTake(10);
var @continue = it1.Select(Mutate);
// DoSkip/DoTake are defined on EnumerableOnce itself, so advance the shared
// enumerator through it1; @continue picks up wherever it left off.
it1.DoSkip(20);
@continue = @continue.Where(Filter);
it1.DoTake(5);

ISupportIncrementalLoading retrieving exception

I have implemented an ISupportIncrementalLoading interface to perform incremental loading of a ListView.
The interface has the following code:
public interface IIncrementalSource<T>
{
    Task<IEnumerable<T>> GetPagedItems(int pageIndex, int pageSize);
}

public class IncrementalLoadingCollection<T, I> : ObservableCollection<I>,
    ISupportIncrementalLoading where T : IIncrementalSource<I>, new()
{
    private T source;
    private int itemsPerPage;
    private bool hasMoreItems;
    private int currentPage;

    public IncrementalLoadingCollection(int itemsPerPage = 10)
    {
        this.source = new T();
        this.itemsPerPage = itemsPerPage;
        this.hasMoreItems = true;
    }

    public void UpdateItemsPerPage(int newItemsPerPage)
    {
        this.itemsPerPage = newItemsPerPage;
    }

    public bool HasMoreItems
    {
        get { return hasMoreItems; }
    }

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return Task.Run<LoadMoreItemsResult>(
            async () =>
            {
                uint resultCount = 0;
                var dispatcher = Window.Current.Dispatcher;
                var result = await source.GetPagedItems(currentPage++, itemsPerPage);
                if (result == null || result.Count() == 0)
                {
                    hasMoreItems = false;
                }
                else
                {
                    resultCount = (uint)result.Count();
                    await Task.WhenAll(Task.Delay(10), dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        foreach (I item in result)
                            this.Add(item);
                    }).AsTask());
                }
                return new LoadMoreItemsResult() { Count = resultCount };
            }).AsAsyncOperation<LoadMoreItemsResult>();
    }
}
The collection is instantiated like this:
var collection = new IncrementalLoadingCollection<LiveTextCode, LiveText>();
this.LTextLW.ItemsSource = collection;
Where LiveText is a UserForm and LiveTextCode is a class that, among other things, sets up that UserForm.
The UserForm is filled by reading XML files located on a server, so the code must perform async operations, and so must the containing scope. For some unknown reason, the instance of the custom interface is used before it is filled, so I'm getting a NullReferenceException (or at least that's the hypothesis that makes the most sense to me...).
I'm pretty lost and I don't know how to fix it; if anyone could help, it would be much appreciated.
Thanks in advance!
Instead of using this.LTextLW.ItemsSource = collection;, expose an ObservableCollection property, say collection, and bind the ListView to it with ItemsSource="{Binding collection}".
Because it is an ObservableCollection, the view is refreshed as soon as the collection's contents change.
Alternatively, you can back the collection property with a RaisePropertyChanged event:
private IncrementalLoadingCollection<LiveTextCode, LiveText> _collection;
public IncrementalLoadingCollection<LiveTextCode, LiveText> collection
{
    get { return _collection; }
    set
    {
        _collection = value;
        RaisePropertyChanged();
    }
}
This will update the UI whenever the value changes.

implement ienumerable with ienumerable<T>

Why does a generic collection class that implements the generic IEnumerable<T> interface also need to implement the non-generic IEnumerable?
A code example on MSDN illustrates this (Link - http://msdn.microsoft.com/en-us/library/9eekhta0(v=vs.110).aspx):
public class App
{
    // Exercise the Iterator and show that it's more
    // performant.
    public static void Main()
    {
        TestStreamReaderEnumerable();
        TestReadingFile();
    }

    public static void TestStreamReaderEnumerable()
    {
        // Check the memory before the iterator is used.
        long memoryBefore = GC.GetTotalMemory(true);

        // Open a file with the StreamReaderEnumerable and check for a string.
        var stringsFound =
            from line in new StreamReaderEnumerable(@"c:\temp\tempFile.txt")
            where line.Contains("string to search for")
            select line;
        Console.WriteLine("Found: " + stringsFound.Count());

        // Check the memory after the iterator and output it to the console.
        long memoryAfter = GC.GetTotalMemory(false);
        Console.WriteLine("Memory Used With Iterator = \t"
            + string.Format(((memoryAfter - memoryBefore) / 1000).ToString(), "n") + "kb");
    }

    public static void TestReadingFile()
    {
        long memoryBefore = GC.GetTotalMemory(true);
        StreamReader sr = File.OpenText(@"c:\temp\tempFile.txt");

        // Add the file contents to a generic list of strings.
        List<string> fileContents = new List<string>();
        while (!sr.EndOfStream) {
            fileContents.Add(sr.ReadLine());
        }

        // Check for the string.
        var stringsFound =
            from line in fileContents
            where line.Contains("string to search for")
            select line;
        sr.Close();
        Console.WriteLine("Found: " + stringsFound.Count());

        // Check the memory after when the iterator is not used, and output it to the console.
        long memoryAfter = GC.GetTotalMemory(false);
        Console.WriteLine("Memory Used Without Iterator = \t" +
            string.Format(((memoryAfter - memoryBefore) / 1000).ToString(), "n") + "kb");
    }
}
// A custom class that implements IEnumerable(T). When you implement IEnumerable(T),
// you must also implement IEnumerable and IEnumerator(T).
public class StreamReaderEnumerable : IEnumerable<string>
{
    private string _filePath;

    public StreamReaderEnumerable(string filePath)
    {
        _filePath = filePath;
    }

    // Must implement GetEnumerator, which returns a new StreamReaderEnumerator.
    public IEnumerator<string> GetEnumerator()
    {
        return new StreamReaderEnumerator(_filePath);
    }

    // Must also implement IEnumerable.GetEnumerator, but implement as a private method.
    private IEnumerator GetEnumerator1()
    {
        return this.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator1();
    }
}
// When you implement IEnumerable(T), you must also implement IEnumerator(T),
// which will walk through the contents of the file one line at a time.
// Implementing IEnumerator(T) requires that you implement IEnumerator and IDisposable.
public class StreamReaderEnumerator : IEnumerator<string>
{
    private StreamReader _sr;

    public StreamReaderEnumerator(string filePath)
    {
        _sr = new StreamReader(filePath);
    }

    private string _current;

    // Implement the IEnumerator(T).Current publicly, but implement
    // IEnumerator.Current, which is also required, privately.
    public string Current
    {
        get
        {
            if (_sr == null || _current == null)
            {
                throw new InvalidOperationException();
            }
            return _current;
        }
    }

    private object Current1
    {
        get { return this.Current; }
    }

    object IEnumerator.Current
    {
        get { return Current1; }
    }

    // Implement MoveNext and Reset, which are required by IEnumerator.
    public bool MoveNext()
    {
        _current = _sr.ReadLine();
        if (_current == null)
            return false;
        return true;
    }

    public void Reset()
    {
        _sr.DiscardBufferedData();
        _sr.BaseStream.Seek(0, SeekOrigin.Begin);
        _current = null;
    }

    // Implement IDisposable, which is also implemented by IEnumerator(T).
    private bool disposedValue = false;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!this.disposedValue)
        {
            if (disposing)
            {
                // Dispose of managed resources.
            }
            _current = null;
            _sr.Close();
            _sr.Dispose();
        }
        this.disposedValue = true;
    }

    ~StreamReaderEnumerator()
    {
        Dispose(false);
    }

    // This example displays output similar to the following:
    // Found: 2
    // Memory Used With Iterator = 33kb
    // Found: 2
    // Memory Used Without Iterator = 206kb
}
The need is simply because of how the interfaces are implemented. In .NET 1.1, there were no generics, so IEnumerable has no generic support and only exposes object. This introduces boxing (and is why the compiler also supports a pattern-based implementation for foreach, independent of IEnumerable). In C# 2, we got IEnumerable<T>. It is useful to consider all typed iterations as also being untyped, hence:
IEnumerable<T> : IEnumerable
where IEnumerable<T> re-declares GetEnumerator with the generic type. However, since these are now different methods with different signatures, a class needs two different implementations.
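A minimal sketch of that dual implementation (the NumberRange type is hypothetical, for illustration): the generic GetEnumerator is the public one, and the non-generic IEnumerable.GetEnumerator is implemented explicitly by delegating to it, so untyped callers get the same iteration, just through object.

```csharp
using System.Collections;
using System.Collections.Generic;

public class NumberRange : IEnumerable<int>
{
    private readonly int _count;

    public NumberRange(int count) => _count = count;

    // The typed implementation; this is what foreach over NumberRange binds to.
    public IEnumerator<int> GetEnumerator()
    {
        for (int i = 0; i < _count; i++)
            yield return i;
    }

    // The explicit non-generic implementation required because
    // IEnumerable<T> inherits IEnumerable; it just delegates.
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

Both views enumerate the same elements; the non-generic one simply boxes each int as object.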
