How to use DataPager with database-side paging - C#

I am using ListView/DataPager.
For performance reasons I page my results at the database, using ROW_NUMBER (SQL 2005), so my C# code only ever receives one page at a time. How can I tell the DataPager that there are more rows than are actually in my List?

I created a class that generates fake default(T) placeholders for the rows outside the current page. It worked fine:
public class PagedList<T> : IEnumerable<T>, ICollection
{
private IEnumerable<T> ActualPage { get; set; }
private int Total { get; set; }
private int StartIndex { get; set; }
public PagedList(int total, int startIndex, IEnumerable<T> actualPage)
{
ActualPage = actualPage;
Total = total;
StartIndex = startIndex;
}
public IEnumerator<T> GetEnumerator()
{
bool passouPagina = false;
for (int i = 0; i < Total; i++)
{
if (i < StartIndex || passouPagina)
{
yield return default(T);
}
else
{
passouPagina = true;
foreach (T itempagina in ActualPage)
{
i++;
yield return itempagina;
}
}
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
#region Implementation of ICollection
void ICollection.CopyTo(Array array, int index)
{
throw new NotSupportedException();
}
public int Count
{
get { return Total; }
}
object ICollection.SyncRoot
{
get { throw new NotSupportedException(); }
}
bool ICollection.IsSynchronized
{
get { throw new NotSupportedException(); }
}
#endregion
}
Usage example:
int totalRows = DB.GetTotalPeople();
int rowIndex = (currentPage - 1) * pageSize;
List<Person> peoplePage = DB.GetPeopleAtPage(currentPage);
listView.DataSource = new PagedList<Person>(totalRows, rowIndex, peoplePage);
listView.DataBind();

Apparently I can't comment on the above solution, which was provided by Fujiy, but I discovered the following bug:
Inside GetEnumerator(), the extra increment in the else branch always causes the collection to skip one default element, unless you are on the last page of the PagedList.
As an example, create a paged list of 5 elements with StartIndex 3 and 1 element per page. The loop yields defaults for i == 0, 1 and 2, then enters the else branch at i == 3 and yields the actual element. The foreach increments i to 4, and the for-header then increments it again to 5, so no default element is ever produced for position 4:
i == 0 -> default
i == 1 -> default
i == 2 -> default
i == 3 -> actual element
i == 4 -> skipped
A simple solution would be to either use three for-loops (one for the defaults before the ActualPage, one for the ActualPage itself, and one for the defaults after it), or to add an i-- after the foreach loop inside the else branch.
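For illustration, here is a minimal sketch of the three-loop version of GetEnumerator(), using the same members as the PagedList<T> above:

public IEnumerator<T> GetEnumerator()
{
    // Placeholders before the current page.
    for (int i = 0; i < StartIndex; i++)
        yield return default(T);

    // The real items of the current page.
    int pageCount = 0;
    foreach (T item in ActualPage)
    {
        pageCount++;
        yield return item;
    }

    // Placeholders after the current page.
    for (int i = StartIndex + pageCount; i < Total; i++)
        yield return default(T);
}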

Related

Understanding Enumerator in List<T>

The source code of Enumerator is:
public struct Enumerator : IEnumerator<T>, System.Collections.IEnumerator {
private List<T> list;
private int index;
private int version;
private T current;
...
public bool MoveNext() {
List<T> localList = list; <--------------Q1
if (version == localList._version && ((uint)index < (uint)localList._size)) {
current = localList._items[index];
index++;
return true;
}
return MoveNextRare();
}
private bool MoveNextRare() {
if (version != list._version) {
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
}
index = list._size + 1; <-----------------Q2
current = default(T);
return false;
}
void System.Collections.IEnumerator.Reset() {
if (version != list._version) {
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
}
index = 0;
current = default(T);
}
...
}
I have some questions on this iterator pattern:
Q1 - Why does the MoveNext method define a localList? Can't it just use the private field list directly, since List<T> is already a reference type? Why create an alias for it?
Q2 - MoveNextRare is invoked when the index has moved past the last element in the list, so what's the point of incrementing the index at all? Why not just leave it untouched, given that Reset() sets index back to 0 anyway?
For the first question I don't have an answer; maybe it is just a relic of some previous implementation, or maybe it somehow improves performance (though I would wonder how and why, so my bet is on the first guess). Also, in the .NET Core implementation the list field is marked as readonly.
As for the second one: it has nothing to do with Reset, but with the System.Collections.IEnumerator.Current implementation:
Object System.Collections.IEnumerator.Current {
get {
if( index == 0 || index == list._size + 1) { // check the second comparison
ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumOpCantHappen);
}
return Current;
}
}
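To see the effect of that sentinel value, here is a small illustrative sketch (my own code, not the BCL source): after MoveNext has returned false, index equals _size + 1, so reading Current through the non-generic IEnumerator interface throws.

using System;
using System.Collections;
using System.Collections.Generic;

class CurrentAfterEndDemo
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };
        IEnumerator e = list.GetEnumerator();   // non-generic view of List<T>.Enumerator
        while (e.MoveNext()) { }                // exhaust the enumerator; index is now _size + 1

        try
        {
            object current = e.Current;         // trips the index == list._size + 1 check above
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Current is not valid after enumeration has finished.");
        }
    }
}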

How to handle loops when going i-1

I have a general question about looping in c#, especially when using lists.
I want to implement a simple polygon ear-slicing algorithm.
Here is the algorithm:
(Source: http://www.diva-portal.org/smash/get/diva2:330344/FULLTEXT02, page 6)
I have already implemented finding the ear tips. But there is the problem that I have to access the i-1, or sometimes even i-2, elements of the list. My workaround was to add the last elements at the top of the list and the first elements at the end of the list.
But when it comes to the next step, where I have to remove some points from the polygon to get further through the algorithm, this approach is not good at all. So the problem is accessing elements at the end of the polygon while working at the beginning =) I hope that makes sense.
Here a snippet so you know what I'm talking about:
// suppose that i = 0 at the first step and polygonPoints is List<Vector>
Vector pi = new Vector(polygonPoints[i - 1]);
Vector pj = new Vector(polygonPoints[i]);
Vector pk = new Vector(polygonPoints[i + 1]);
// create line between i-1 and i+1
Line diagonal = new Line(pi,pk);
I would appreciate any suggestion.
Thanks in advance.
I hope I understood the question correctly: Your problem is to calculate the neighbor indices at the end of your node list, right?
If so, why don't you just calculate the indices using a simple modulo function like this:
int mod(int k, int x)
{
return ((k % x) + x) % x;
}
//...
polygonPoints[mod(i + n, polygonPoints.Count)]
where n is your offset.
This means e.g. for a polygon point list with 10 elements, i = 9 and n = 1 that:
mod((9 + 1), 10) = 0
And in particular, that the next neighbor of the node at index 9 is at index 0.
And for i = 0 and n = -1:
mod((0 - 1), 10) = 9
Which means, that the previous node for the node at index 0 is at index position 9.
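Applied to the snippet from the question (Vector, Line and polygonPoints as defined there), the neighbor lookups might look roughly like this:

int count = polygonPoints.Count;

Vector pi = new Vector(polygonPoints[mod(i - 1, count)]); // previous point, wraps to the end when i == 0
Vector pj = new Vector(polygonPoints[mod(i, count)]);     // current point
Vector pk = new Vector(polygonPoints[mod(i + 1, count)]); // next point, wraps to the start at the end

// create the diagonal between the two neighbors of pj
Line diagonal = new Line(pi, pk);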
You could also create a decorator over the collection.
Then you can define the indexer property to handle the out-of-bounds indexes.
This way you don't have to call mod all the time, it will be handled in the indexer. You do have to have a collection that supports indexing or IndexOf, such as List.
using System;
using System.Collections;
using System.Collections.Generic;
class Program
{
private static void Main(string[] args)
{
var list = new List<int> { 1, 2, 3, 4, 5 };
var decoratedList = new OverindexableListDecorator<int>(list);
Console.WriteLine("-1st element is: {0}", decoratedList[-1]);
Console.WriteLine("Element at index 3 is: {0}", decoratedList[3]);
Console.WriteLine("6th element is: {0}", decoratedList[6]);
Console.ReadKey();
}
}
class OverindexableListDecorator<T> : IList<T>
{
private readonly IList<T> store;
public OverindexableListDecorator(IList<T> collectionToWrap)
{
this.store = collectionToWrap;
}
public T this[int index]
{
get
{
int actualIndex = IndexModuloCount(index);
return store[actualIndex];
}
set
{
int actualIndex = IndexModuloCount(index);
store[actualIndex] = value;
}
}
public void RemoveAt(int index)
{
var actualIndex = IndexModuloCount(index);
store.RemoveAt(actualIndex); // use the wrapped index, not the raw one
}
public void Insert(int index, T item)
{
var actualIndex = IndexModuloCount(index);
store.Insert(actualIndex, item);
}
private int IndexModuloCount(int i)
{
int count = this.Count;
return ((i % count) + count) % count;
}
#region Delegate calls
public IEnumerator<T> GetEnumerator()
{
return store.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public void Add(T item)
{
store.Add(item);
}
public void Clear()
{
store.Clear();
}
public bool Contains(T item)
{
return store.Contains(item);
}
public void CopyTo(T[] array, int arrayIndex)
{
store.CopyTo(array, arrayIndex);
}
public bool Remove(T item)
{
return store.Remove(item);
}
public int Count
{
get { return store.Count; }
}
public bool IsReadOnly
{
get { return store.IsReadOnly; }
}
public int IndexOf(T item)
{
return store.IndexOf(item);
}
#endregion
}
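As a usage sketch for the ear-clipping code from the question (Vector, Line and polygonPoints come from there), the decorator removes the need for any special handling at the ends:

var points = new OverindexableListDecorator<Vector>(polygonPoints);

Vector pi = new Vector(points[i - 1]); // wraps to the last element when i == 0
Vector pj = new Vector(points[i]);
Vector pk = new Vector(points[i + 1]); // wraps to the first element at the end

Line diagonal = new Line(pi, pk);

// Removing an ear also works with out-of-range indexes:
points.RemoveAt(i);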

Displaying unsorted List's top 20 items

I have an unsorted List<WordCount>
class WordCount
{
string word;
int count;
}
And now I must display the top 20 items in descending order of count. How could I code this efficiently? Currently I would set a minimum integer of -1 (all counts are >= 1) and do a for loop of 20 iterations with a foreach loop inside. This is an issue though, because the last few elements in the List could have a count of 1 while the top few may also contain an element with count 1, so now I am stuck on the pseudocode for displaying them in order.
I CANNOT use LINQ or anything other than the methods of the List class. I personally think I must accomplish this using Sort() and CompareTo() somehow. This is meant to be a brain twister, and that is the reason it has to be done under the given restriction.
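Under that restriction, one minimal sketch is to sort with an inline Comparison<T> delegate and slice with GetRange, both of which are List<T> members (here words stands for the unsorted List<WordCount>, and the word/count fields are assumed to be accessible, e.g. made public):

// Sort descending by count, then take the first 20 (or fewer if the list is short).
words.Sort((a, b) => b.count.CompareTo(a.count));
List<WordCount> top20 = words.GetRange(0, Math.Min(20, words.Count));

foreach (WordCount wc in top20)
    Console.WriteLine("{0}: {1}", wc.word, wc.count);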
This should work:
List<WordCount> counts = new List<WordCount>();
//Fill the list
var result = counts.OrderBy(c => c.Count).Take(20);
Descending order:
var result = counts.OrderByDescending(c => c.Count).Take(20);
[Edit] Using self-made methods:
Here's a solution without any ready-made .NET sorting method. First sort the list using a simple algorithm; in this case I used bubble sort (not efficient for larger collections). Then I take the first 20 elements from the sorted result:
public class WordCount
{
public string Word { get; set; }
public int CharCount { get; set; }
}
public List<WordCount> SortList(List<WordCount> list)
{
WordCount temp;
for (int i = list.Count -1; i >= 1; i--)
{
for (int j = 0; j < list.Count -1; j++)
{
if(list[j].CharCount < list[j+1].CharCount)
{
temp = list[j];
list[j] = list[j+1];
list[j+1] = temp;
}
}
}
return list;
}
public List<WordCount> TakeNItems(int n, List<WordCount> list)
{
List<WordCount> temp = new List<WordCount>();
for(int i = 0; i < n; i++)
temp.Add(list[i]);
return temp;
}
//Usage:
var result = SortList(counts);
result = TakeNItems(20, result);
[Edit2] Using Sort() / CompareTo()
Yes, it is also possible using Sort() and CompareTo(). This requires a couple of changes to your class, because if you try to call Sort() now you'll get an InvalidOperationException: the WordCount class does not implement the IComparable interface. When you implement the interface, it is also good practice to override Equals() and GetHashCode() so that equality stays consistent with the comparison. Here's a simple implementation based on the List(T).Sort method documentation:
public class WordCount : IComparable<WordCount>
{
public string Word { get; set; }
public int CharCount { get; set; }
public override bool Equals(object obj)
{
if (obj == null)
return false;
WordCount wc = obj as WordCount;
return wc == null ? false : Equals(wc);
}
public int CompareTo(WordCount wc)
{
//Descending
return wc == null ? 1 : wc.CharCount.CompareTo(CharCount);
//Ascending
//return wc == null ? 1 : CharCount.CompareTo(wc.CharCount);
}
public override int GetHashCode()
{
return CharCount;
}
public bool Equals(WordCount wc)
{
return wc == null ? false : CharCount.Equals(wc.CharCount);
}
}
//Usage:
List<WordCount> counts = new List<WordCount>();
//Fill the list
counts.Sort();
And for the limit of 20 items you can write your own extension method which would basically do the same as the Enumerable.Take Method:
public static class Extensions
{
public static IEnumerable<T> TakeN<T>(this List<T> list, int n)
{
// Stop early if the list has fewer than n items (mirrors Enumerable.Take).
for(int i = 0; i < n && i < list.Count; i++)
yield return list[i];
}
}
//Usage:
List<WordCount> counts = new List<WordCount>();
//Fill the list with 10000 items and call TakeN()
IEnumerable<WordCount> smallList = counts.TakeN(20);
//Or, without LINQ's ToList():
counts = new List<WordCount>(counts.TakeN(20));
Hope this clarifies it all! ;)
The most straight-forward solution, using System.Linq:
var words = new List<WordCount>();
var result = from w in words orderby w.count descending select w.word;
result = result.Take(20);
This is the most convenient and clear solution, so use LINQ when possible. Note that result is an IEnumerable<string> here, and the query uses deferred execution, so the elements are not computed until they are actually enumerated.

Alternative for the Stack

I am working in a .NET environment using C#. I need an alternative to the Stack data structure: some kind of bounded stack. The number of elements in the collection shouldn't exceed some fixed, specified limit, and if that limit is reached and a new element is pushed, the oldest element must be removed. I need this for storing commands for Undo/Redo strategies.
A circular buffer should do the job; easy enough to wrap a list or array, but nothing built in AFAIK.
Johnny Coder has an implementation here: http://johnnycoder.com/blog/2008/01/07/undo-functionality-with-a-limited-stack/
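As an illustration of the wrap-an-array idea (a minimal sketch, not the implementation from that link; the class and member names are made up):

using System;

// A bounded "undo" stack backed by a circular buffer: pushing onto a full
// stack overwrites the oldest element.
public class CircularUndoStack<T>
{
    private readonly T[] buffer;
    private int top = -1;    // index of the most recent element
    private int count;

    public CircularUndoStack(int capacity)
    {
        buffer = new T[capacity];
    }

    public int Count { get { return count; } }

    public void Push(T item)
    {
        top = (top + 1) % buffer.Length;
        buffer[top] = item;
        if (count < buffer.Length)
            count++;                          // otherwise the oldest slot was just overwritten
    }

    public T Pop()
    {
        if (count == 0)
            throw new InvalidOperationException("The stack is empty.");
        T item = buffer[top];
        buffer[top] = default(T);             // release the reference
        top = (top - 1 + buffer.Length) % buffer.Length;
        count--;
        return item;
    }

    public T Peek()
    {
        if (count == 0)
            throw new InvalidOperationException("The stack is empty.");
        return buffer[top];
    }
}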
This is an implementation of a stack with a constrained capacity.
Once the given capacity is reached, items at the bottom of the stack are discarded. It is possible to iterate through the contained objects and to set the index to a specific position (like a rewind), so that multiple entries are discarded at once when a new item is pushed onto the stack.
This is a custom implementation with some goodies; it saves you from juggling more than one list if you need to go back in the history and forward again (that is built in).
public class DiscardingStack<TObject> : IEnumerable<TObject>
{
private readonly int capacity;
private readonly List<TObject> items;
private int index = -1;
public DiscardingStack(int capacity)
{
this.capacity = capacity;
items = new List<TObject>(capacity);
}
public DiscardingStack(int capacity, IEnumerable<TObject> collection)
: this(capacity)
{
foreach (var o in collection)
{
Push(o);
}
}
public DiscardingStack(ICollection<TObject> collection)
: this(collection.Count, collection)
{
}
public void Clear()
{
if (items.Count >= 0)
{
items.Clear();
index = items.Count - 1;
}
}
public int Index
{
get { return index; }
set
{
if (value >= 0 && value < items.Count)
{
index = value;
}
else throw new InvalidOperationException();
}
}
public int Count
{
get { return items.Count; }
}
public TObject Current
{
get { return items[index]; }
set { index = items.IndexOf(value); }
}
public int Capacity
{
get { return capacity; }
}
public TObject Pop()
{
if (items.Count <= 0)
throw new InvalidOperationException();
var i = items.Count - 1;
var removed = items[i];
items.RemoveAt(i);
if (index > i)
index = i;
return removed;
}
public void Push(TObject item)
{
if (index == capacity - 1)
{
items.RemoveAt(0);
index--;
}
else if (index < items.Count - 1)
{
var removeAt = index + 1;
var removeCount = items.Count - removeAt;
items.RemoveRange(removeAt, removeCount);
}
items.Add(item);
index = items.Count - 1;
}
public TObject Peek()
{
return items[items.Count-1];
}
public TObject this[int i]
{
get { return items[i]; }
}
public IEnumerator<TObject> GetEnumerator()
{
return items.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
Anyway, if your list is huge, a stack that discards elements when the maximum capacity is reached is better built on a LinkedList, because removing the bottom element then involves no copying. So the LinkedList idea may be preferable to wrapping a List when the maximum buffer size is a high value; a sketch of the push operation follows below.
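For example, with a LinkedList<TObject> the discard on push is O(1) and shifts nothing (an illustrative fragment; history and maxSize are made-up member names):

// history is a LinkedList<TObject>; maxSize is the capacity.
public void Push(TObject item)
{
    history.AddLast(item);          // the new top of the stack
    if (history.Count > maxSize)
        history.RemoveFirst();      // drop the oldest entry without shifting any elements
}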
I would also recommend packing Push(), Pop() etc. into an interface (e.g. IStack). Sadly, there is no IStack interface predefined in .NET (AFAIK).
.NET is rather deficient in its collection types. You'll find a collection library here; use its CircularQueue.
There's no built-in class for this in the Framework (the standard collections don't delete data automatically). You could subclass the non-generic System.Collections.Stack, whose Push/Pop are virtual, or wrap a Stack<T> (whose methods are not virtual) in your own class, and adapt the methods to suit your needs.

Is there a SortedList<T> class in .NET? (not SortedList<K,V>)

I need to sort some objects according to their contents (in fact according to one of their properties, which is NOT the key and may be duplicated between different objects).
.NET provides two sorted key/value classes, SortedDictionary and SortedList. SortedDictionary is implemented as a binary search tree, while SortedList is backed by sorted arrays. The documented differences between them are:
SortedList uses less memory than SortedDictionary.
SortedDictionary has faster insertion and removal operations for unsorted data, O(log n) as opposed to O(n) for SortedList.
If the list is populated all at once from sorted data, SortedList is faster than SortedDictionary.
I could achieve what I want using a List<T> and its Sort() method with a custom implementation of IComparer, but it would not be time-efficient, as I would sort the whole List each time I want to insert a new object, whereas a good SortedList would just insert the item at the right position.
What I need is a SortedList class with a RefreshPosition(int index) to move only the changed (or inserted) object rather than resorting the whole list each time an object inside changes.
Am I missing something obvious?
Maybe I'm slow, but isn't this the easiest implementation ever?
class SortedList<T> : List<T>
{
public new void Add(T item)
{
int index = BinarySearch(item);
// A non-negative result means an equal item already exists; insert next to it.
// Otherwise BinarySearch returns the bitwise complement of the insertion point.
Insert(index >= 0 ? index : ~index, item);
}
}
http://msdn.microsoft.com/en-us/library/w4e7fxsh.aspx
Unfortunately, Add isn't overridable, so I had to hide it with new, which isn't so nice when you have List<T> list = new SortedList<T>(); (which I actually needed to do)... so I went ahead and rebuilt the whole thing...
class SortedList<T> : IList<T>
{
private List<T> list = new List<T>();
public int IndexOf(T item)
{
var index = list.BinarySearch(item);
return index < 0 ? -1 : index;
}
public void Insert(int index, T item)
{
throw new NotImplementedException("Cannot insert at index; must preserve order.");
}
public void RemoveAt(int index)
{
list.RemoveAt(index);
}
public T this[int index]
{
get
{
return list[index];
}
set
{
list.RemoveAt(index);
this.Add(value);
}
}
public void Add(T item)
{
int index = list.BinarySearch(item);
// Handle duplicates: a non-negative result means an equal item already exists.
list.Insert(index >= 0 ? index : ~index, item);
}
public void Clear()
{
list.Clear();
}
public bool Contains(T item)
{
return list.BinarySearch(item) >= 0;
}
public void CopyTo(T[] array, int arrayIndex)
{
list.CopyTo(array, arrayIndex);
}
public int Count
{
get { return list.Count; }
}
public bool IsReadOnly
{
get { return false; }
}
public bool Remove(T item)
{
var index = list.BinarySearch(item);
if (index < 0) return false;
list.RemoveAt(index);
return true;
}
public IEnumerator<T> GetEnumerator()
{
return list.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return list.GetEnumerator();
}
}
Or perhaps something like this is a more appropriate Remove function...
public bool Remove(T item)
{
var index = list.BinarySearch(item);
if (index < 0) return false;
// BinarySearch may land on any element that compares equal; walk forward through
// the run of equal elements looking for the exact item, staying inside the list.
while (index < list.Count && ((IComparable)item).CompareTo(list[index]) == 0)
{
if (EqualityComparer<T>.Default.Equals(item, list[index]))
{
list.RemoveAt(index);
return true;
}
index++;
}
return false;
}
Assuming items can compare equal but not be equal...
I eventually decided to write it:
class RealSortedList<T> : List<T>
{
public IComparer<T> comparer;
public int SortItem(int index)
{
T item = this[index];
this.RemoveAt(index);
int goodposition = FindLocation(item, 0, this.Count); // the item was just removed, so search for it, not this[index]
this.Insert(goodposition, item);
return goodposition;
}
public int FindLocation(T item, int begin, int end)
{
if (begin==end)
return begin;
int middle = (begin + end) / 2;
int comparisonvalue = comparer.Compare(item, this[middle]);
if (comparisonvalue < 0)
return FindLocation(item,begin, middle);
else if (comparisonvalue > 0)
return FindLocation(item, middle + 1, end);
else
return middle;
}
}
Don't forget that inserting an item into a list backed by an array can be an expensive operation - inserting a bunch of items and then sorting may well be quicker unless you really need to sort after every single operation.
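In code, the batch approach is simply the following (list being a List<T>, newItems an IEnumerable<T> and comparer whatever IComparer<T> you already use):

// Batch the inserts, then sort once: one O(n log n) sort instead of
// shifting array elements on every single insertion.
list.AddRange(newItems);
list.Sort(comparer);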
Alternatively, you could always wrap a list and make your add operation find the right place and insert it there.
I've solved this problem in the past by writing an extension method that does a binary search on an IList<T>, and another that does a sorted insert. You can look up the correct implementation in the CLR source, because there's a built-in version that works only on arrays, and then just tweak it to be an extension on IList<T>.
One of those "should be in the BCL already" things.
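A minimal sketch of what such extension methods could look like (illustrative; not the CLR code):

using System.Collections.Generic;

public static class SortedIListExtensions
{
    // Returns the index of item, or the bitwise complement of the insertion point,
    // mirroring the contract of Array.BinarySearch / List<T>.BinarySearch.
    public static int BinarySearch<T>(this IList<T> list, T item)
    {
        IComparer<T> comparer = Comparer<T>.Default;
        int lo = 0, hi = list.Count - 1;
        while (lo <= hi)
        {
            int mid = lo + ((hi - lo) >> 1);
            int cmp = comparer.Compare(list[mid], item);
            if (cmp == 0) return mid;
            if (cmp < 0) lo = mid + 1;
            else hi = mid - 1;
        }
        return ~lo;
    }

    // Inserts item so that an already-sorted list stays sorted.
    public static void InsertSorted<T>(this IList<T> list, T item)
    {
        int index = list.BinarySearch(item);
        list.Insert(index >= 0 ? index : ~index, item);
    }
}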
What I need is a SortedList class with a RefreshPosition(int index) to move only the changed (or inserted) object rather than resorting the whole list each time an object inside changes.

Why would you update using an index when such updates invalidate the index? Really, I think updating by object reference would be more convenient. You can do this with SortedList<K,V> - just remember that your Key type is the same as the return type of the function that extracts the comparable data from the object.
class UpdateableSortedList<K,V> {
private SortedList<K,V> list = new SortedList<K,V>();
public delegate K ExtractKeyFunc(V v);
private ExtractKeyFunc func;
public UpdateableSortedList(ExtractKeyFunc f) { func = f; }
public void Add(V v) {
list[func(v)] = v;
}
public void Update(V v) {
int i = list.IndexOfValue(v);
if (i >= 0) {
list.RemoveAt(i);
}
list[func(v)] = v;
}
public IEnumerable<V> Values { get { return list.Values; } }
}
Something like that I guess.
