I am working in a .NET environment using C#. I need an alternative to the Stack data structure: some kind of bounded stack. The number of elements in the collection should not exceed a fixed, specified limit, and if that limit has been reached and a new element is pushed, the oldest element must be discarded. I need this for storing commands for an Undo/Redo strategy.
A circular buffer should do the job; it's easy enough to wrap a list or array, but there's nothing built in AFAIK.
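A minimal sketch of that wrap-an-array idea (the type and member names here are made up; nothing like this is built in): a fixed-size buffer where Push overwrites the oldest entry once the capacity is reached.

public class CircularUndoStack<T>
{
    private readonly T[] buffer;
    private int top = -1;   // index of the most recently pushed element
    private int count;

    public CircularUndoStack(int capacity)
    {
        buffer = new T[capacity];
    }

    public int Count { get { return count; } }

    public void Push(T item)
    {
        top = (top + 1) % buffer.Length;   // wrap around, overwriting the oldest slot
        buffer[top] = item;
        if (count < buffer.Length) count++;
    }

    public T Pop()
    {
        if (count == 0) throw new InvalidOperationException("Stack is empty.");
        T item = buffer[top];
        top = (top - 1 + buffer.Length) % buffer.Length;
        count--;
        return item;
    }
}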
Johnny Coder has an implementation here: http://johnnycoder.com/blog/2008/01/07/undo-functionality-with-a-limited-stack/
This is an implementation of a stack with a constrained capacity.
Once the given capacity is reached, items at the bottom of the stack are discarded. You can iterate over the contained objects and set the index to a specific position (like a rewind) to discard multiple entries at once when the next item is pushed.
It is my own implementation, with a few goodies: going back in the history and forward again is built in, so you never have to juggle more than one list.
public class DiscardingStack<TObject> : IEnumerable<TObject>
{
private readonly int capacity;
private readonly List<TObject> items;
private int index = -1;
public DiscardingStack(int capacity)
{
this.capacity = capacity;
items = new List<TObject>(capacity);
}
public DiscardingStack(int capacity, IEnumerable<TObject> collection)
: this(capacity)
{
foreach (var o in collection)
{
Push(o);
}
}
public DiscardingStack(ICollection<TObject> collection)
: this(collection.Count, collection)
{
}
public void Clear()
{
if (items.Count > 0)
{
items.Clear();
index = items.Count - 1;
}
}
public int Index
{
get { return index; }
set
{
if (value >= 0 && value < items.Count)
{
index = value;
}
else throw new InvalidOperationException();
}
}
public int Count
{
get { return items.Count; }
}
public TObject Current
{
get { return items[index]; }
set { index = items.IndexOf(value); }
}
public int Capacity
{
get { return capacity; }
}
public TObject Pop()
{
if (items.Count <= 0)
throw new InvalidOperationException();
var i = items.Count - 1;
var removed = items[i];
items.RemoveAt(i);
if (index >= i)
index = i - 1; // the removed item was the current one; step back so the index stays in range
return removed;
}
public void Push(TObject item)
{
if (index == capacity - 1)
{
items.RemoveAt(0);
index--;
}
else if (index < items.Count - 1)
{
var removeAt = index + 1;
var removeCount = items.Count - removeAt;
items.RemoveRange(removeAt, removeCount);
}
items.Add(item);
index = items.Count - 1;
}
public TObject Peek()
{
return items[items.Count-1];
}
public TObject this[int i]
{
get { return items[i]; }
}
public IEnumerator<TObject> GetEnumerator()
{
return items.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
Anyway, if your history is huge, a stack that discards elements when the maximum capacity is reached is better built on a LinkedList (as suggested above), because removing the oldest element from the front of a List forces a copy of the remaining items. So the LinkedList idea may beat wrapping a List when the buffer maximum is large.
I would also recommend packing Push(), Pop(), etc. into an interface (e.g. IStack). Sadly, there is no IStack interface predefined in .NET (AFAIK).
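If you go that route, the interface could look something like this (purely illustrative; nothing similar ships with the framework):

public interface IStack<T> : IEnumerable<T>
{
    int Count { get; }
    void Push(T item);
    T Pop();
    T Peek();
    void Clear();
}

Both the List-backed and the LinkedList-backed variants could then implement it, and the undo/redo code would only depend on the interface.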
.NET is rather deficient in the kinds of collections it ships with. You'll find a collection library here. Use CircularQueue.
There's no built-in class for this in the Framework (its collections don't delete data automatically), but you could wrap the Stack class, or shadow its Push/Pop and other methods, to suit your needs.
The source code of Enumerator is:
public struct Enumerator : IEnumerator<T>, System.Collections.IEnumerator {
private List<T> list;
private int index;
private int version;
private T current;
...
public bool MoveNext() {
List<T> localList = list; <--------------Q1
if (version == localList._version && ((uint)index < (uint)localList._size)) {
current = localList._items[index];
index++;
return true;
}
return MoveNextRare();
}
private bool MoveNextRare() {
if (version != list._version) {
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
}
index = list._size + 1; <-----------------Q2
current = default(T);
return false;
}
void System.Collections.IEnumerator.Reset() {
if (version != list._version) {
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
}
index = 0;
current = default(T);
}
...
}
I have some questions on this iterator pattern:
Q1 - Why does MoveNext need to define localList? Can't it just use the private field list directly, since List<T> is already a reference type? Why create an alias for it?
Q2 - MoveNextRare is invoked when index runs past the last element of the list, so what's the point of incrementing index there? Why not just leave it untouched, since Reset() sets index back to 0 anyway?
For the first question I don't have a definite answer; maybe it is just a relic of some previous implementation, or maybe it somehow improves performance (though I would wonder how and why, so my bet is on the first guess). Also, in the .NET Core implementation the list field is marked as readonly.
As for the second one - it has nothing to do with Reset, but with System.Collections.IEnumerator.Current implementation:
Object System.Collections.IEnumerator.Current {
get {
if( index == 0 || index == list._size + 1) { // note the second comparison
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumOpCantHappen);
}
return Current;
}
}
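You can see both invalid states from user code through the non-generic interface (a quick sketch based on the source above; each commented-out line would throw InvalidOperationException):

var list = new List<int> { 1, 2, 3 };
System.Collections.IEnumerator e = list.GetEnumerator();   // boxed, non-generic view

// Before the first MoveNext, index == 0:
// Console.WriteLine(e.Current);   // would throw

while (e.MoveNext()) { }

// After MoveNext has returned false, index == list._size + 1,
// which is exactly the state MoveNextRare sets up:
// Console.WriteLine(e.Current);   // would throw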
I have a general question about looping in C#, especially when using lists.
I want to implement a simple polygon ear-slicing algorithm.
Here is the algorithm:
(Source: http://www.diva-portal.org/smash/get/diva2:330344/FULLTEXT02, page 6)
I have already implemented finding the ear tips. But there is the problem that I have to access the i-1, or sometimes even the i-2, element of the list. My workaround was to add the last elements at the top of the list and the first elements at the end of the list.
But when it comes to the next step, where I have to remove some points from the polygon to go further through the algorithm, this approach is not good at all. So the problem is accessing elements at the end of the polygon while working at the beginning =) I hope that makes sense.
Here's a snippet so you know what I'm talking about:
// suppose that i = 0 at the first step and polygonPoints is List<Vector>
Vector pi = new Vector(polygonPoints[i - 1]);
Vector pj = new Vector(polygonPoints[i]);
Vector pk = new Vector(polygonPoints[i + 1]);
// create line between i-1 and i+1
Line diagonal = new Line(pi,pk);
I would appreciate any suggestion.
Thanks in advance.
I hope I understood the question correctly: Your problem is to calculate the neighbor indices at the end of your node list, right?
If so, why don't you just calculate the indices using a simple modulo function like this:
int mod(int k, int x)
{
return ((k % x) + x) % x;
}
//...
polygonPoints[mod(i + n, polygonPoints.Count)]
where n is your offset.
This means e.g. for a polygon point list with 10 elements, i = 9 and n = 1 that:
mod((9 + 1), 10) = 0
And in particular, that the next neighbor of the node at index 9 is at index 0.
And for i = 0 and n = -1:
mod((0 - 1), 10) = 9
Which means that the previous node for the node at index 0 is at index 9.
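Applied to the snippet from the question (a sketch, assuming polygonPoints is the List<Vector> shown there), the neighbours wrap around automatically, so i = 0 no longer fails:

int n = polygonPoints.Count;
Vector pi = new Vector(polygonPoints[mod(i - 1, n)]);   // previous point; wraps to the last element when i == 0
Vector pj = new Vector(polygonPoints[i]);
Vector pk = new Vector(polygonPoints[mod(i + 1, n)]);   // next point; wraps to the first element at the end
Line diagonal = new Line(pi, pk);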
You could also create a decorator over the collection.
Then you can define the indexer property to handle the out-of-bounds indexes.
This way you don't have to call mod all the time; it is handled in the indexer. You do need a collection that supports indexing and IndexOf, such as List.
using System;
using System.Collections;
using System.Collections.Generic;
class Program
{
private static void Main(string[] args)
{
var list = new List<int> { 1, 2, 3, 4, 5 };
var decoratedList = new OverindexableListDecorator<int>(list);
Console.WriteLine("-1st element is: {0}", decoratedList[-1]);
Console.WriteLine("Element at index 3 is: {0}", decoratedList[3]);
Console.WriteLine("6th element is: {0}", decoratedList[6]);
Console.ReadKey();
}
}
class OverindexableListDecorator<T> : IList<T>
{
private readonly IList<T> store;
public OverindexableListDecorator(IList<T> collectionToWrap)
{
this.store = collectionToWrap;
}
public T this[int index]
{
get
{
int actualIndex = IndexModuloCount(index);
return store[actualIndex];
}
set
{
int actualIndex = IndexModuloCount(index);
store[actualIndex] = value;
}
}
public void RemoveAt(int index)
{
var actualIndex = IndexModuloCount(index);
store.RemoveAt(actualIndex);
}
public void Insert(int index, T item)
{
var actualIndex = IndexModuloCount(index);
store.Insert(actualIndex, item);
}
private int IndexModuloCount(int i)
{
int count = this.Count;
return ((i % count) + count) % count;
}
#region Delegate calls
public IEnumerator<T> GetEnumerator()
{
return store.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public void Add(T item)
{
store.Add(item);
}
public void Clear()
{
store.Clear();
}
public bool Contains(T item)
{
return store.Contains(item);
}
public void CopyTo(T[] array, int arrayIndex)
{
store.CopyTo(array, arrayIndex);
}
public bool Remove(T item)
{
return store.Remove(item);
}
public int Count
{
get { return store.Count; }
}
public bool IsReadOnly
{
get { return store.IsReadOnly; }
}
public int IndexOf(T item)
{
return store.IndexOf(item);
}
#endregion
}
I have been looking for a way of splitting a foreach loop into multiple parts and came across the following code:
foreach(var item in items.Skip(currentPage * itemsPerPage).Take(itemsPerPage))
{
//Do stuff
}
Would items.Skip(currentPage * itemsPerPage).Take(itemsPerPage) be processed in every iteration, or would it be processed once, and have a temporary result used with the foreach loop automatically by the compiler?
It would be processed once, not on every iteration.
It's the same as:
public IEnumerable<Something> GetData() {
return someData;
}
foreach(var d in GetData()) {
//do something with [d]
}
The foreach construction is equivalent to:
IEnumerator enumerator = myCollection.GetEnumerator();
try
{
while (enumerator.MoveNext())
{
object current = enumerator.Current;
Console.WriteLine(current);
}
}
finally
{
IDisposable e = enumerator as IDisposable;
if (e != null)
{
e.Dispose();
}
}
So myCollection would be processed only once.
Update:
Please note that this depends on the implementation of the IEnumerator that the IEnumerable uses.
In this (evil) example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Collections;
namespace TestStack
{
class EvilEnumerator<T> : IEnumerator<T> {
private IEnumerable<T> enumerable;
private int index = -1;
public EvilEnumerator(IEnumerable<T> e)
{
enumerable = e;
}
#region IEnumerator<T> Members
public T Current
{
get { return enumerable.ElementAt(index); }
}
#endregion
#region IDisposable Members
public void Dispose()
{
}
#endregion
#region IEnumerator Members
object IEnumerator.Current
{
get { return enumerable.ElementAt(index); }
}
public bool MoveNext()
{
index++;
if (index >= enumerable.Count())
return false;
return true;
}
public void Reset()
{
}
#endregion
}
class DemoEnumerable<T> : IEnumerable<T>
{
private IEnumerable<T> enumerable;
public DemoEnumerable(IEnumerable<T> e)
{
enumerable = e;
}
#region IEnumerable<T> Members
public IEnumerator<T> GetEnumerator()
{
return new EvilEnumerator<T>(enumerable);
}
#endregion
#region IEnumerable Members
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
#endregion
}
class Program
{
static void Main(string[] args)
{
IEnumerable<int> numbers = Enumerable.Range(0,100);
DemoEnumerable<int> enumerable = new DemoEnumerable<int>(numbers);
foreach (var item in enumerable)
{
Console.WriteLine(item);
}
}
}
}
Each step of a foreach over enumerable evaluates numbers twice: MoveNext calls Count() and Current calls ElementAt(), both of which re-enumerate the underlying sequence.
Question:
Would items.Skip(currentPage * itemsPerPage).Take(itemsPerPage) be
processed every iteration, or would it be processed once, and have a
temporary result used with the foreach loop automatically by the
compiler?
Answer:
It would be processed once, not every iteration. You can put the collection into a variable to make the foreach more readable. Illustrated below.
foreach(var item in items.Skip(currentPage * itemsPerPage).Take(itemsPerPage))
{
//Do stuff
}
vs.
List<MyClass> query = items.Skip(currentPage * itemsPerPage).Take(itemsPerPage).ToList();
foreach(var item in query)
{
//Do stuff
}
vs.
IEnumerable<MyClass> query = items.Skip(currentPage * itemsPerPage).Take(itemsPerPage);
foreach(var item in query)
{
//Do stuff
}
The code that you present will only iterate the items in the list once, as others have pointed out.
However, that only gives you the items for one page. If you are handling multiple pages, you must be calling that code once for each page (because somewhere you must be incrementing currentPage, right?).
What I mean is that you must be doing something like this:
for (int currentPage = 0; currentPage < numPages; ++currentPage)
{
foreach (var item in items.Skip(currentPage*itemsPerPage).Take(itemsPerPage))
{
//Do stuff
}
}
Now if you do that, then you will be iterating the sequence multiple times - once for each page. The first iteration will only go as far as the end of the first page, but the next will iterate from the beginning to the end of the second page (via the Skip() and the Take()) - and the next will iterate from the beginning to the end of the third page. And so on.
To avoid that you can write an extension method for IEnumerable<T> which partitions the data into batches (which you could also describe as "paginating" the data into "pages").
Rather than just presenting an IEnumerable of IEnumerables, it can be more useful to wrap each batch in a class to supply the batch index along with the items in the batch, like so:
public sealed class Batch<T>
{
public readonly int Index;
public readonly IEnumerable<T> Items;
public Batch(int index, IEnumerable<T> items)
{
Index = index;
Items = items;
}
}
public static class EnumerableExt
{
// Note: Not threadsafe, so not suitable for use with Parallel.Foreach() or IEnumerable.AsParallel()
public static IEnumerable<Batch<T>> Partition<T>(this IEnumerable<T> input, int batchSize)
{
var enumerator = input.GetEnumerator();
int index = 0;
while (enumerator.MoveNext())
yield return new Batch<T>(index++, nextBatch(enumerator, batchSize));
}
private static IEnumerable<T> nextBatch<T>(IEnumerator<T> enumerator, int blockSize)
{
do { yield return enumerator.Current; }
while (--blockSize > 0 && enumerator.MoveNext());
}
}
This extension method does not buffer the data, and it only iterates through it once.
Given this extension method, it becomes more readable to batch up the items. Note that this example enumerates through ALL items for all pages, unlike the OP's example which only iterates through the items for one page:
var items = Enumerable.Range(10, 50); // Pretend we have 50 items.
int itemsPerPage = 20;
foreach (var page in items.Partition(itemsPerPage))
{
Console.Write("Page " + page.Index + " items: ");
foreach (var i in page.Items)
Console.Write(i + " ");
Console.WriteLine();
}
I am using ListView/DataPager.
For performance reasons I page my results at the database, using ROW_NUMBER (SQL Server 2005).
Only one page at a time reaches my C# code. How can I tell the DataPager that there are more rows than are actually in my List?
I created a class that generates fake default(T) objects. It worked fine:
public class PagedList<T> : IEnumerable<T>, ICollection
{
private IEnumerable<T> ActualPage { get; set; }
private int Total { get; set; }
private int StartIndex { get; set; }
public PagedList(int total, int startIndex, IEnumerable<T> actualPage)
{
ActualPage = actualPage;
Total = total;
StartIndex = startIndex;
}
public IEnumerator<T> GetEnumerator()
{
bool passouPagina = false;
for (int i = 0; i < Total; i++)
{
if (i < StartIndex || passouPagina)
{
yield return default(T);
}
else
{
passouPagina = true;
foreach (T itempagina in ActualPage)
{
i++;
yield return itempagina;
}
}
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
#region Implementation of ICollection
void ICollection.CopyTo(Array array, int index)
{
throw new NotSupportedException();
}
public int Count
{
get { return Total; }
}
object ICollection.SyncRoot
{
get { throw new NotSupportedException(); }
}
bool ICollection.IsSynchronized
{
get { throw new NotSupportedException(); }
}
#endregion
}
Usage example:
int totalRows = DB.GetTotalPeople();
int rowIndex = (currentPage-1)*pageSize;
List<Person> peoplePage = DB.GetPeopleAtPage(currentPage);
listView.DataSource = new PagedList<Person>(totalRows, rowIndex, peoplePage);
listView.DataBind();
Apparently I can't comment on the solution above, provided by Fujiy, but I discovered the following bug:
Inside GetEnumerator(), the increment in the else branch always causes the collection to skip one default element, unless you are on the last page of the PagedList.
For example, create a paged list of 5 elements with the actual page starting at the 3rd position and holding 1 element. The code enters the else branch for the 3rd element, increments i inside the foreach, and then the for-header increments i again, so no default element is ever produced for the 4th position:
i == 1 -> default
i == 2 -> default
i == 3 -> Actual element
i == 4 -> Skipped
i == 5 -> default
A simple solution would be either to use three for-loops (one for the defaults before the ActualPage, one for the ActualPage, and one for the elements after it), or to add an i-- after the foreach loop inside the else branch.
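With the second option, GetEnumerator would look roughly like this (a sketch of the i-- variant):

public IEnumerator<T> GetEnumerator()
{
    bool passouPagina = false;
    for (int i = 0; i < Total; i++)
    {
        if (i < StartIndex || passouPagina)
        {
            yield return default(T);
        }
        else
        {
            passouPagina = true;
            foreach (T itempagina in ActualPage)
            {
                i++;
                yield return itempagina;
            }
            i--;   // undo the last increment so the for-header's i++ doesn't skip a position
        }
    }
}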
I need to sort some objects according to their contents (in fact according to one of their properties, which is NOT the key and may be duplicated between different objects).
.NET provides two classes (SortedDictionary and SortedList); SortedDictionary is implemented as a binary search tree, while SortedList is backed by sorted arrays of keys and values. The main differences between them are:
SortedList uses less memory than SortedDictionary.
SortedDictionary has faster insertion and removal operations for unsorted data, O(log n) as opposed to O(n) for SortedList.
If the list is populated all at once from sorted data, SortedList is faster than SortedDictionary.
I could achieve what I want with a List and its Sort() method plus a custom IComparer implementation, but that would not be time-efficient: I would be sorting the whole List every time I insert a new object, whereas a good SortedList would just insert the item at the right position.
What I need is a SortedList class with a RefreshPosition(int index) to move only the changed (or inserted) object rather than resorting the whole list each time an object inside changes.
Am I missing something obvious?
Maybe I'm slow, but isn't this the easiest implementation ever?
class SortedList<T> : List<T>
{
public new void Add(T item)
{
int index = BinarySearch(item);
Insert(index < 0 ? ~index : index, item); // ~index is the insertion point when the item is not already present
}
}
http://msdn.microsoft.com/en-us/library/w4e7fxsh.aspx
Unfortunately, Add isn't overridable, so I had to shadow it with new, which isn't so nice when you have List<T> list = new SortedList<T>(); (which I actually needed to do)... so I went ahead and rebuilt the whole thing:
class SortedList<T> : IList<T>
{
private List<T> list = new List<T>();
public int IndexOf(T item)
{
var index = list.BinarySearch(item);
return index < 0 ? -1 : index;
}
public void Insert(int index, T item)
{
throw new NotSupportedException("Cannot insert at an arbitrary index; the sort order must be preserved.");
}
public void RemoveAt(int index)
{
list.RemoveAt(index);
}
public T this[int index]
{
get
{
return list[index];
}
set
{
list.RemoveAt(index);
this.Add(value);
}
}
public void Add(T item)
{
var index = list.BinarySearch(item);
list.Insert(index < 0 ? ~index : index, item); // tolerate items that compare equal to an existing one
}
public void Clear()
{
list.Clear();
}
public bool Contains(T item)
{
return list.BinarySearch(item) >= 0;
}
public void CopyTo(T[] array, int arrayIndex)
{
list.CopyTo(array, arrayIndex);
}
public int Count
{
get { return list.Count; }
}
public bool IsReadOnly
{
get { return false; }
}
public bool Remove(T item)
{
var index = list.BinarySearch(item);
if (index < 0) return false;
list.RemoveAt(index);
return true;
}
public IEnumerator<T> GetEnumerator()
{
return list.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return list.GetEnumerator();
}
}
Or perhaps something like this is a more appropriate Remove function...
public bool Remove(T item)
{
var index = list.BinarySearch(item);
if (index < 0) return false;
// BinarySearch may land anywhere inside a run of elements that compare equal,
// so rewind to the start of that run and then scan forward for the same instance.
while (index > 0 && Comparer<T>.Default.Compare(item, list[index - 1]) == 0)
index--;
while (index < list.Count && Comparer<T>.Default.Compare(item, list[index]) == 0)
{
if (ReferenceEquals(item, list[index]))
{
list.RemoveAt(index);
return true;
}
index++;
}
return false;
}
Assuming items can compare equal but not be equal...
I eventually decided to write it:
class RealSortedList<T> : List<T>
{
public IComparer<T> comparer;
public int SortItem(int index)
{
T item = this[index];
this.RemoveAt(index);
int goodposition = FindLocation(item, 0, this.Count);
this.Insert(goodposition, item);
return goodposition;
}
public int FindLocation(T item, int begin, int end)
{
if (begin==end)
return begin;
int middle = (begin + end) / 2;
int comparisonvalue = comparer.Compare(item, this[middle]);
if (comparisonvalue < 0)
return FindLocation(item,begin, middle);
else if (comparisonvalue > 0)
return FindLocation(item, middle + 1, end);
else
return middle;
}
}
Don't forget that inserting an item into a list backed by an array can be an expensive operation - inserting a bunch of items and then sorting may well be quicker unless you really need to sort after every single operation.
Alternatively, you could always wrap a list and make your add operation find the right place and insert it there.
I've solved this problem in the past by writing an extension method that does a binary search on an IList, and another that does a sorted insert. You can look up the correct implementation in the CLR source, because there's a built-in version that works only on arrays, and then just tweak it to be an extension on IList.
One of those "should be in the BCL already" things.
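Such a pair of extensions might look roughly like this (a sketch, not the actual CLR code; it mirrors the Array.BinarySearch contract of returning the bitwise complement of the insertion point when the item is not found):

using System.Collections.Generic;

public static class ListExtensions
{
    // Binary search over any IList<T>.
    public static int BinarySearch<T>(this IList<T> list, T item, IComparer<T> comparer = null)
    {
        comparer = comparer ?? Comparer<T>.Default;
        int lo = 0, hi = list.Count - 1;
        while (lo <= hi)
        {
            int mid = lo + ((hi - lo) >> 1);
            int cmp = comparer.Compare(list[mid], item);
            if (cmp == 0) return mid;
            if (cmp < 0) lo = mid + 1;
            else hi = mid - 1;
        }
        return ~lo;   // insertion point that keeps the list sorted
    }

    // Insert while preserving sort order.
    public static void InsertSorted<T>(this IList<T> list, T item, IComparer<T> comparer = null)
    {
        int index = list.BinarySearch(item, comparer);
        list.Insert(index < 0 ? ~index : index, item);
    }
}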
What I need is a SortedList class with
a RefreshPosition(int index) to move
only the changed (or inserted) object
rather than resorting the whole list
each time an object inside changes.
Why would you update using an index when such updates invalidate the index? Really, I would think that updating by object reference would be more convenient. You can do this with the SortedList - just remember that your Key type is the same as the return type of the function that extracts the comparable data from the object.
class UpdateableSortedList<K,V> {
private SortedList<K,V> list = new SortedList<K,V>();
public delegate K ExtractKeyFunc(V v);
private ExtractKeyFunc func;
public UpdateableSortedList(ExtractKeyFunc f) { func = f; }
public void Add(V v) {
list[func(v)] = v;
}
public void Update(V v) {
int i = list.IndexOfValue(v);
if (i >= 0) {
list.RemoveAt(i);
}
list[func(v)] = v;
}
public IEnumerable<V> Values { get { return list.Values; } }
}
Something like that I guess.
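Usage might look like this (the Job type and its Priority property are made up for illustration):

// Items are kept sorted by Priority, which is not their identity.
class Job { public string Name; public int Priority; }

var jobs = new UpdateableSortedList<int, Job>(j => j.Priority);
var a = new Job { Name = "a", Priority = 5 };
jobs.Add(a);
jobs.Add(new Job { Name = "b", Priority = 2 });

a.Priority = 1;    // the comparable property changed...
jobs.Update(a);    // ...so re-seat the object at its new position

Note that SortedList<K,V> requires unique keys, so two objects whose extracted values are equal would collide; handling the duplicated-property case from the question would need a composite key or a different backing structure.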