Remove last item of Stack [duplicate] - c#

Is there a built-in way to limit the depth of a System.Collections.Generic.Stack? So that if you are at max capacity, pushing a new element would remove the bottom of the stack?
I know I can do it by converting to an array and rebuilding the stack, but I figured there's probably a method on it already.
EDIT: I wrote an extension method:
public static Stack<T> Trim<T>(this Stack<T> stack, int trimCount)
{
    if (stack.Count <= trimCount)
        return stack;

    // ToArray() yields items top-first, so keep the newest trimCount items
    // and reverse them before rebuilding to preserve the original order.
    return new Stack<T>(stack.ToArray().Take(trimCount).Reverse());
}
So, it returns a new stack upon trimming, but isn't immutability the functional way =)
The reason for this is that I'm storing undo steps for an app in a stack, and I only want to store a finite amount of steps.

What you are looking for is called a dropout stack. AFAIK, the BCL does not contain one, although they are trivial to implement. Typically Undo and Redo functionality relies on such data structures.
They are basically an array, and when you push onto the stack the 'top' of the stack moves around the array. Eventually the top will wrap back to the beginning when the stack is full and replace the 'bottom' of the stack.
Google didn't provide much info on it. This is the best I could find:
(Warning PDF)
http://courses.cs.vt.edu/~cs2704/spring04/projects/DropOutStack.pdf
Here's some boilerplate code to get you started. I'll let you fill in the rest (sanity checking, count, indexer, etc.)
class DropOutStack<T>
{
    private T[] items;
    private int top = 0;

    public DropOutStack(int capacity)
    {
        items = new T[capacity];
    }

    public void Push(T item)
    {
        // Write into the current top slot; once the buffer is full this overwrites the oldest entry.
        items[top] = item;
        top = (top + 1) % items.Length;
    }

    public T Pop()
    {
        // Step the top back one slot (wrapping around) and return what's there.
        top = (items.Length + top - 1) % items.Length;
        return items[top];
    }
}
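To make the behaviour concrete, here's a quick usage sketch of the boilerplate above (the capacity of 3 and the strings are just illustrative):

var undo = new DropOutStack<string>(3);
undo.Push("step 1");
undo.Push("step 2");
undo.Push("step 3");
undo.Push("step 4");            // wraps around and overwrites "step 1"

Console.WriteLine(undo.Pop());  // step 4
Console.WriteLine(undo.Pop());  // step 3
Console.WriteLine(undo.Pop());  // step 2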

You're actually looking at something similar to a circular list implementation. There's a LimitedQueue implementation done by PIEBALDconsult at CodeProject. It's similar to your requirement; you just need to wrap a Stack instead of a Queue as done by the author. The author also implemented an indexer, which is handy if you need to access anything other than the top of the stack (to show an undo list, maybe).
EDIT: The author's implementation also raises an event when the last (or first, depending on whether it's a queue or a stack) item is being removed, so you can know when something is being thrown away.

I can't see a way. You can inherit from Stack<T>, but there doesn't appear to be anything useful to override.
The easy (if a bit tedious) way would be to wrap the Stack<T> in your own, say, LimitedStack<T>. Then implement the methods you want and pass through to an internal Stack<T>, while including your limiting logic in the Push method and wherever else you need it.
It's a pain to write all those pass-through members, especially if you're implementing all the same interfaces as Stack<T>...but on the other hand, you only have to do it once and then it's done.
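For illustration, here's a minimal sketch of that wrapper idea; the class name LimitedStack<T> and the rebuild-on-overflow strategy are assumptions for the example, not an existing BCL type (it needs using System.Linq for Take/Reverse):

public class LimitedStack<T>
{
    private readonly int maxSize;
    private Stack<T> inner = new Stack<T>();

    public LimitedStack(int maxSize)
    {
        this.maxSize = maxSize;
    }

    public int Count { get { return inner.Count; } }
    public T Peek() { return inner.Peek(); }
    public T Pop() { return inner.Pop(); }

    public void Push(T item)
    {
        inner.Push(item);
        if (inner.Count > maxSize)
        {
            // Drop the oldest item by rebuilding from the newest maxSize items.
            // ToArray() enumerates top-first, so reverse before reconstructing.
            inner = new Stack<T>(inner.ToArray().Take(maxSize).Reverse());
        }
    }
}

Rebuilding on every overflow is O(n), so this is only reasonable for small limits; the dropout stack above avoids that cost.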

I believe you're looking for a (possibly-modified) deque - the data structure that allows access from either end.

The dropout stack solution provided by Greg Dean is very good, but I think the main question is about removing the bottom of the stack when the stack overflows.
The provided solution just replaces the last item of the stack once the stack is full, so you do not get a real history.
To get a real history, you would need to shift the list once it reaches your specified capacity, but this is an expensive operation.
So I think the best solution for this is a linked list.
This is my solution to the problem:
public class HistoryStack<T>
{
private LinkedList<T> items = new LinkedList<T>();
public List<T> Items => items.ToList();
public int Capacity { get;}
public HistoryStack(int capacity)
{
Capacity = capacity;
}
public void Push(T item)
{
// full
if (items.Count == Capacity)
{
// remove the oldest entry first so the list never grows beyond Capacity
items.RemoveFirst();
items.AddLast(item);
}
else
{
items.AddLast(new LinkedListNode<T>(item));
}
}
public T Pop()
{
if (items.Count == 0)
{
return default;
}
var ls = items.Last;
items.RemoveLast();
return ls == null ? default : ls.Value;
}
}
Test it
var hs = new HistoryStack<int>(5);
hs.Push(1);
hs.Push(2);
hs.Push(3);
hs.Push(4);
hs.Push(5);
hs.Push(6);
hs.Push(7);
hs.Push(8);
var ls = hs.Items;
Console.WriteLine(String.Join(",", ls));
Console.WriteLine(hs.Pop());
Console.WriteLine(hs.Pop());
hs.Push(9);
Console.WriteLine(hs.Pop());
Console.WriteLine(hs.Pop());
Console.WriteLine(hs.Pop());
Console.WriteLine(hs.Pop());
Console.WriteLine(hs.Pop()); // empty
Console.WriteLine(hs.Pop()); // empty
The Result
4,5,6,7,8
8
7
9
6
5
4
0
0

Related

Recursive function in C#

I have an object, let's call it "Friend".
This object has a method "GetFriendsOfFriend" that returns a List<Friend>.
Given a user input of, say, 5, I want all of the Friend's friends, the friends' friends, and so on (you get the point), down to a depth of 5 (this can be up to 20).
This may be a lot of calculations, so I don't know if recursion is the best solution.
Does anyone have a smart idea of
1. How to do this recursive function best?
2. How to do it without recursion.
Thanks!
Whilst it is certainly possible to do this without recursion, I don't see a particular problem with what you're trying to do. To prevent things going crazy, it might make sense to set a maximum depth to prevent your program from dying.
public class Friend
{
public static readonly int MaxDepth = 8; // prevent more than 8 recursions
private List<Friend> myFriends_ = new List<Friend>();
// private implementation
private void InternalFriends(int depth, int currDepth, List<Friend> list)
{
// Add "us"
if(currDepth > 1 && !list.Contains(this))
list.Add(this);
if(currDepth <= depth)
{
foreach(Friend f in myFriends_)
{
if(!list.Contains(f))
f.InternalFriends(depth, currDepth + 1, list); // we can call private functions here.
}
}
} // eo InternalFriends
public List<Friend> GetFriendsOfFriend(int depth)
{
List<Friend> ret = new List<Friend>();
InternalFriends(depth < MaxDepth ? depth : MaxDepth, 1, ret);
return ret;
} // eo getFriendsOfFriend
} // eo class Friend
EDIT: Fixed an error in the code in that an actual friend would not get added, just "their" friends. This is only necessary when adding friends after a depth of "1" (the first call). I also made use of Contains to check for duplicates.
Here is a non recursive version of this code:
public static void ProcessFriendsOf(string person) {
var toVisit = new Queue<string>();
var seen = new HashSet<string>();
toVisit.Enqueue(person);
seen.Add(person);
while(toVisit.Count > 0) {
var current = toVisit.Dequeue();
//process this friend in some way
foreach(var friend in GetFriendsOfFriend(current)) {
if (!seen.Contains(friend)) {
toVisit.Enqueue(friend);
seen.Add(friend);
}
}
}
}
It avoids infinite loops by keeping a HashSet of all members already seen and not adding a member to be processed more than once.
It visits friends using a Queue, in a way that is known as Breadth-first search. If we use a Stack instead of a Queue, it becomes a Depth-first search, and would behave pretty much the same as a recursive approach (which uses an implicit stack - the call stack).
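For comparison, here is a sketch of the depth-first variant mentioned above; it only swaps the Queue for a Stack and otherwise keeps the same shape (GetFriendsOfFriend is assumed to be the same lookup used in the queue-based code):

public static void ProcessFriendsOfDepthFirst(string person) {
    var toVisit = new Stack<string>();
    var seen = new HashSet<string>();
    toVisit.Push(person);
    seen.Add(person);
    while (toVisit.Count > 0) {
        var current = toVisit.Pop();
        // process this friend in some way
        foreach (var friend in GetFriendsOfFriend(current)) {
            if (!seen.Contains(friend)) {
                toVisit.Push(friend);
                seen.Add(friend);
            }
        }
    }
}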

c# clone a stack

I have a few stacks in my code that I use to keep track of my logical location. At certain times I need to duplicate the stacks, but I can't seem to clone them in such a way that the order is preserved. I only need shallow duplication (copy the references, not the objects).
What would be the proper way to do it? Or should I use some other sort of stacks?
NOTE: I saw this post Stack Clone Problem: .NET Bug or Expected Behaviour?, but not sure how to setup clone method for the Stack class.
NOTE #2: I use System.Collections.Generic.Stack
var clonedStack = new Stack<T>(new Stack<T>(oldStack));
You can write this as an extension method as
public static Stack<T> Clone<T>(this Stack<T> stack) {
Contract.Requires(stack != null);
return new Stack<T>(new Stack<T>(stack));
}
This is necessary because the Stack<T> constructor we are using here is Stack<T>(IEnumerable<T> source), and iterating over a Stack<T> yields items in pop order (top first), which is the reverse of the order you want them pushed onto the new stack. Doing this process twice therefore results in the stack being in the correct order.
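A quick sketch makes the ordering issue visible:

var original = new Stack<int>();
original.Push(1);
original.Push(2);
original.Push(3);                                       // pop order: 3, 2, 1

var reversed = new Stack<int>(original);                // single wrap: pop order becomes 1, 2, 3
var clone = new Stack<int>(new Stack<int>(original));   // double wrap: pop order is 3, 2, 1 again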
Alternatively:
var clonedStack = new Stack<T>(oldStack.Reverse());
public static Stack<T> Clone<T>(this Stack<T> stack) {
Contract.Requires(stack != null);
return new Stack<T>(stack.Reverse());
}
Again, we have to walk the stack in the reverse of the order produced by iterating over it.
I doubt there is a performance difference between these two methods.
For those who care about performance: there are a few other ways to iterate through the original stack's members without a big performance loss:
public T[] Stack<T>.ToArray();
public void Stack<T>.CopyTo(T[] array, int arrayIndex);
I wrote a rough program (the link is provided at the end of the post) to measure performance and added two tests for the already-suggested implementations (see Clone1 and Clone2), and two tests for the ToArray and CopyTo approaches (see Clone3 and Clone4; both use the more efficient Array.Reverse method).
public static class StackExtensions
{
public static Stack<T> Clone1<T>(this Stack<T> original)
{
return new Stack<T>(new Stack<T>(original));
}
public static Stack<T> Clone2<T>(this Stack<T> original)
{
return new Stack<T>(original.Reverse());
}
public static Stack<T> Clone3<T>(this Stack<T> original)
{
var arr = original.ToArray();
Array.Reverse(arr);
return new Stack<T>(arr);
}
public static Stack<T> Clone4<T>(this Stack<T> original)
{
var arr = new T[original.Count];
original.CopyTo(arr, 0);
Array.Reverse(arr);
return new Stack<T>(arr);
}
}
The results are:
Clone1: 318,3766 ms
Clone2: 269,2407 ms
Clone3: 50,6025 ms
Clone4: 37,5233 ms - the winner
As we can see, the approach using the CopyTo method is about 8 times faster, and at the same time the implementation is pretty simple and straightforward. Moreover, I did some quick research on the maximum stack size: the Clone3 and Clone4 tests worked for bigger stack sizes before an OutOfMemoryException occurred:
Clone1: 67108765 elements
Clone2: 67108765 elements
Clone3: 134218140 elements
Clone4: 134218140 elements
The results above for Clone1 and Clone2 are smaller due to the additional collections that were explicitly/implicitly created, which affected memory consumption. Thus, the Clone3 and Clone4 approaches let you clone a stack instance faster and with fewer memory allocations. You can gain even better results using reflection, but that's a different story :)
The full program listing can be found here.
Here's one easy way, using LINQ:
var clone = new Stack<ElementType>(originalStack.Reverse());
You could create an extension method to make this easier:
public static class StackExtensions
{
public static Stack<T> Clone<T>(this Stack<T> source)
{
return new Stack<T>(source.Reverse());
}
}
Usage:
var clone = originalStack.Clone();
If you don't need an actual Stack`1, but just want a quick copy of an existing Stack`1 that you can walk in the same order that Pop returns, another option is to duplicate the Stack`1 into a Queue`1. The elements will then Dequeue in the same order that they'll Pop from the original Stack`1.
using System;
using System.Collections.Generic;
class Test
{
static void Main()
{
var stack = new Stack<string>();
stack.Push("pushed first, pop last");
stack.Push("in the middle");
stack.Push("pushed last, pop first");
var quickCopy = new Queue<string>(stack);
Console.WriteLine("Stack:");
while (stack.Count > 0)
Console.WriteLine(" " + stack.Pop());
Console.WriteLine();
Console.WriteLine("Queue:");
while (quickCopy.Count > 0)
Console.WriteLine(" " + quickCopy.Dequeue());
}
}
Output:
Stack:
pushed last, pop first
in the middle
pushed first, pop last
Queue:
pushed last, pop first
in the middle
pushed first, pop last

how does object of List<string> add the supplied string

Somebody, please shed some light on how the Add method is implemented for List<T> in C#:
listobject.Add();
where List<User> listobject= new List<User>() is the declaration for the object.
I know that using List we can perform many operations on the fly, with type safety too, but what I wonder is how the Add method is implemented so that it takes care of all that at run time.
I hope it does not copy the object and make adjustments on each add, but I will keep my fingers crossed and wait for your reply :)
Using Reflector you can see exactly how it's implemented.
public void Add(T item)
{
if (this._size == this._items.Length)
{
this.EnsureCapacity(this._size + 1);
}
this._items[this._size++] = item;
this._version++;
}
Following 'EnsureCapacity' ...
private void EnsureCapacity(int min)
{
if (this._items.Length < min)
{
int num = (this._items.Length == 0) ? 4 : (this._items.Length * 2);
if (num < min)
{
num = min;
}
this.Capacity = num;
}
}
And finally the setter for 'Capacity'
public int Capacity
{
get
{
return this._items.Length;
}
set
{
if (value != this._items.Length)
{
if (value < this._size)
{
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.value, ExceptionResource.ArgumentOutOfRange_SmallCapacity);
}
if (value > 0)
{
T[] destinationArray = new T[value];
if (this._size > 0)
{
Array.Copy(this._items, 0, destinationArray, 0, this._size);
}
this._items = destinationArray;
}
else
{
this._items = List<T>._emptyArray;
}
}
}
}
Internally the List<T> holds the items in an array. The actual implementation (List<string>) is created at runtime (thanks Jason for the correction), so internally there will be a string array that holds the items.
For reference types the list will hold a reference to the same object instance that you added. This is true for strings as well. However, note that the string class is immutable, so any time you modify a string, it actually results in a new instance.
string a = "a";
List<string> list = new List<string>();
list.Add(a); // now the item in the list and a refer to the same string instance
a = "b"; // a is now a completely new instance, the list
// is still referring the old one
No, it will work on the string reference; otherwise there wouldn't be much point in using it if it always cloned your objects.
You can actually check this sort of thing within Visual Studio by using the memory windows: it's possible to compare the address of the added item with that of the original, and you'll see that they point to the same memory location.
As Frederik notes, List uses an array internally. It creates the array with an initial size and as more items are added, if the array is filled to capacity, it is copied into a larger array. That is why if you know ahead of time that a List will contain many strings, it can help to specify its initial capacity in the constructor.
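For example, if you expect to add roughly ten thousand strings (the figure here is purely illustrative), pre-sizing avoids the repeated grow-and-copy cycles:

var names = new List<string>(10000);   // capacity is 10000, Count is still 0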
When you remove items from the list, or insert in the middle, it must shift all the elements in the internal array so a List is not particularly optimized for adding/removing many items. You may be better off using a LinkedList which is much more efficient at add/remove operations but gives up the ability to efficiently access elements in the list by position.
Different collections have different implementations that are best suited for certain scenarios. For a good example of how various collections are implemented, I suggest checking out PowerCollections library that was released by Wintellect. Many of the collections are no longer relevant in .NET 3.5/4.0 but they provide some great insight on how to go about implementing collections.
For a List<T>, the Add method takes the generic type T, in this case a string. The list stores a reference to the original string instance; note, though, that because strings are immutable, reassigning the original variable afterwards does not change the string held in the list.

How to remove a stack item which is not on the top of the stack in C#

Unfortunately an item can only be removed from the stack by "pop". The stack has no "remove" method or anything similar, but I have a stack (yes, I need a stack!) from which I need to remove some elements that are somewhere in the middle.
Is there a trick to do this?
If you need to remove items that aren't on the top, then you need something other than a stack.
Try making your own implementation of a stack from a List. Then you get to implement your own push and pop functions (add & remove on the list), and your own special PopFromTheMiddle function.
For example
public class ItsAlmostAStack<T>
{
private List<T> items = new List<T>();
public void Push(T item)
{
items.Add(item);
}
public T Pop()
{
if (items.Count > 0)
{
T temp = items[items.Count - 1];
items.RemoveAt(items.Count - 1);
return temp;
}
else
return default(T);
}
public void Remove(int itemAtPosition)
{
items.RemoveAt(itemAtPosition);
}
}
Consider using a different container, maybe a LinkedList.
Then you can use
AddFirst
AddLast
RemoveLast
RemoveFirst
just like pop/push from stack and you can use
Remove
to remove any node from the middle of the list
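A small sketch of that idea, using the LinkedList<T> members listed above as a stack that also allows removal from the middle:

var history = new LinkedList<string>();
history.AddLast("step 1");             // push
history.AddLast("step 2");
history.AddLast("step 3");

history.Remove("step 2");              // remove an item that is not on top

string top = history.Last.Value;       // peek -> "step 3"
history.RemoveLast();                  // pop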
You could use a LinkedList. List-based removal will likely be less efficient.
For removal by reference, List-based stacks will have O(N) search and O(N) resizing, while LinkedList search is O(N) and removal is O(1).
For removal by index, LinkedList should have O(N) traversal and O(1) removal, while List will have O(1) traversal (because it is indexed) and O(N) removal due to resizing.
Besides efficiency, a LinkedList implementation will keep you in the standard library, opening your code to more flexibility and having you write less.
This should be able to handle Pop, Push, and Remove
public class FIFOStack<T> : LinkedList<T>
{
    public T Pop()
    {
        T first = First.Value;   // First is a LinkedListNode<T>
        RemoveFirst();
        return first;
    }

    public void Push(T item)
    {
        AddFirst(item);
    }

    // Remove(T item) is inherited from LinkedList<T>
}
Perhaps an extension method would work, although, I suspect that a different data structure entirely is really needed.
// Note: this assumes the element is present somewhere in the stack; Pop will throw if it is not.
public static T Remove<T>( this Stack<T> stack, T element )
{
T obj = stack.Pop();
if (obj.Equals(element))
{
return obj;
}
else
{
T toReturn = stack.Remove( element );
stack.Push(obj);
return toReturn;
}
}
In a true stack, this can only be done one way -
Pop all of the items until you remove the one you want, then push them back onto the stack in the appropriate order.
This is not very efficient, though.
If you truly want to remove from any location, I'd recommend building a pseudo-stack from a List, LinkedList or some other collection. This would give you the control to do this easily.
Then it is not a stack, right? A stack is LAST in, FIRST out.
You will have to write a custom one or choose something else.
Stack temp = new Stack();
object x;
while ((x = myStack.Pop()) != ObjectImSearchingFor)
    temp.Push(x);
object found = x;
while (temp.Count > 0)
    myStack.Push(temp.Pop());
A trick I use in hairy situations is adding a 'deprecated' flag to the items in the stack.
When I want to 'remove' an item, I simply raise that flag (and clean up any resources that are taken by the object).
Then when Pop()ing items I simply check if the flag is raised, and pop again in a loop until a non-deprecated item is found.
do
{
obj = mQueue.Pop();
} while (obj.deprecated);
You can manage your own item count to know how many 'real' items are still in the queue, and obviously locking should be employed if this is required for a multi-threaded solution.
I found that for queues with a constant flow through them - items pushed and popped - it's much more efficient to handle it this way; it's the fastest you can get (paying O(1) for removing an item from the middle), and memory-wise, if the retained object is small, it's mostly irrelevant as long as items flow at a reasonable pace.
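Here is a minimal sketch of that lazy-delete idea; the Entry wrapper and the names are illustrative assumptions rather than the author's original code:

public class Entry<T>
{
    public T Value;
    public bool Deprecated;   // set to true to "remove" the item without touching the stack
}

public class LazyDeleteStack<T>
{
    private readonly Stack<Entry<T>> stack = new Stack<Entry<T>>();

    // Push returns the entry so the caller can deprecate it later in O(1).
    public Entry<T> Push(T value)
    {
        var entry = new Entry<T> { Value = value };
        stack.Push(entry);
        return entry;
    }

    public T Pop()
    {
        Entry<T> entry;
        do
        {
            entry = stack.Pop();   // throws if everything left is deprecated and the stack empties
        } while (entry.Deprecated);
        return entry.Value;
    }
}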
The constructor of a Stack<> takes IEnumerable<> as a parameter. So it would be possible to perform the following:
myStack = new Stack<item>( myStack.Where(i => i != objectToRemove).Reverse() );
This is non performant in a number of ways.
Hmmm... I agree with the previous two answers, but if you are looking to hack your way through it, just pop and save all elements until you get to the one you want, and then re-push them all.
Yes, it's ugly, badly performing, probably weird code that will need a long comment explaining why, but you could do it...
I came across this question. In my code I created my own extension method:
public static class StackExtensions
{
public static void Remove<T>(this Stack<T> myStack, ICollection<T> elementsToRemove)
{
var reversedStack = new Stack<T>();
while(myStack.Count > 0)
{
var topItem = myStack.Pop();
if (!elementsToRemove.Contains(topItem))
{
reversedStack.Push(topItem);
}
}
while(reversedStack.Count > 0)
{
myStack.Push(reversedStack.Pop());
}
}
}
And I call my method like this:
var removedReportNumbers =
selectedReportNumbersInSession.Except(selectedReportNumbersList).ToList();
selectedReportNumbersInSession.Remove(removedReportNumbers);
I used a list and added some extension methods, e.g.:
public static IThing Pop(this List<IThing> list)
{
if (list == null || list.Count == 0) return default(IThing);
// get last item to return
var thing = list[list.Count - 1];
// remove last item
list.RemoveAt(list.Count-1);
return thing;
}
public static IThing Peek(this List<IThing> list)
{
if (list == null || list.Count == 0) return default(IThing);
// get last item to return
return list[list.Count - 1];
}
public static void Remove(this List<IThing> list, IThing thing)
{
if (list == null || list.Count == 0) return;
if (!list.Contains(thing)) return;
list.Remove(thing); // only removes the first it finds
}
public static void Insert(this List<IThing> list, int index, IThing thing)
{
if (list == null || index > list.Count || index < 0) return;
list.Insert(index, thing);
}
This still processes the whole stack, but it is an alternative to remove an entry. The way this is written though, if the same entry is in the stack multiple times, it will remove them all.
using System.Collections.Generic;
namespace StackTest
{
class Program
{
static void Main(string[] args)
{
Stack<string> stackOne = new Stack<string>();
stackOne.Push("Five");
stackOne.Push("Four");
stackOne.Push("Three");
stackOne.Push("Two");
stackOne.Push("One");
deleteStackEntry(stackOne, "Three");
deleteStackEntry(stackOne, "Five");
}
static void deleteStackEntry(Stack<string> st, string EntryToDelete)
{
// If stack is empty
if (st.Count == 0) return;
// Remove current item
string currEntry = st.Pop();
// Remove other items with recursive call
deleteStackEntry(st, EntryToDelete);
// Put all items back except the one we want to delete
if (currEntry != EntryToDelete)
st.Push(currEntry);
}
}
}

How to initialize a List<T> to a given size (as opposed to capacity)?

.NET offers a generic list container whose performance is almost identical to that of arrays (see the Performance of Arrays vs. Lists question). However, they are quite different in initialization.
Arrays are very easy to initialize with a default value, and by definition they already have a certain size:
string[] Ar = new string[10];
Which allows one to safely assign random items, say:
Ar[5]="hello";
With a list, things are trickier. I can see two ways of doing the same initialization, neither of which is what you would call elegant:
List<string> L = new List<string>(10);
for (int i=0;i<10;i++) L.Add(null);
or
string[] Ar = new string[10];
List<string> L = new List<string>(Ar);
What would be a cleaner way?
EDIT: The answers so far refer to capacity, which is something other than pre-populating a list. For example, on a list just created with a capacity of 10, one cannot do L[2]="somevalue"
EDIT 2: People wonder why I want to use lists this way, as it is not the way they are intended to be used. I can see two reasons:
One could quite convincingly argue that lists are the "next generation" arrays, adding flexibility with almost no penalty. Therefore one should use them by default. I'm pointing out they might not be as easy to initialize.
What I'm currently writing is a base class offering default functionality as part of a bigger framework. In the default functionality I offer, the size of the List is known in advanced and therefore I could have used an array. However, I want to offer any base class the chance to dynamically extend it and therefore I opt for a list.
List<string> L = new List<string> ( new string[10] );
I can't say I need this very often - could you give more details as to why you want this? I'd probably put it as a static method in a helper class:
public static class Lists
{
public static List<T> RepeatedDefault<T>(int count)
{
return Repeated(default(T), count);
}
public static List<T> Repeated<T>(T value, int count)
{
List<T> ret = new List<T>(count);
ret.AddRange(Enumerable.Repeat(value, count));
return ret;
}
}
You could use Enumerable.Repeat(default(T), count).ToList() but that would be inefficient due to buffer resizing.
Note that if T is a reference type, it will store count copies of the reference passed for the value parameter - so they will all refer to the same object. That may or may not be what you want, depending on your use case.
EDIT: As noted in comments, you could make Repeated use a loop to populate the list if you wanted to. That would be slightly faster too. Personally I find the code using Repeat more descriptive, and suspect that in the real world the performance difference would be irrelevant, but your mileage may vary.
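To illustrate the reference-type caveat, here is a small sketch using the Repeated helper above with a mutable type (StringBuilder is just an example):

var shared = Lists.Repeated(new StringBuilder(), 3);
shared[0].Append("x");
Console.WriteLine(shared[1]);   // prints "x" - every slot references the same StringBuilder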
Use the constructor which takes an int ("capacity") as an argument:
List<string> L = new List<string>(10);
EDIT: I should add that I agree with Frederik. You are using the List in a way that goes against the entire reasoning behind using it in the first place.
EDIT2:
EDIT 2: What I'm currently writing is a base class offering default functionality as part of a bigger framework. In the default functionality I offer, the size of the List is known in advanced and therefore I could have used an array. However, I want to offer any base class the chance to dynamically extend it and therefore I opt for a list.
Why would anyone need to know the size of a List with all null values? If there are no real values in the list, I would expect the length to be 0. Anyhow, the fact that this is cludgy demonstrates that it is going against the intended use of the class.
Create an array with the number of items you want first and then convert the array in to a List.
int[] fakeArray = new int[10];
List<int> list = fakeArray.ToList();
If you want to initialize the list with N elements of some fixed value:
public List<T> InitList<T>(int count, T initValue)
{
return Enumerable.Repeat(initValue, count).ToList();
}
Why are you using a List if you want to initialize it with a fixed value ?
I can understand that -for the sake of performance- you want to give it an initial capacity, but isn't one of the advantages of a list over a regular array that it can grow when needed ?
When you do this:
List<int> L = new List<int>(100);
You create a list whose capacity is 100 integers. This means that your List won't need to 'grow' until you add the 101th item.
The underlying array of the list will be initialized with a length of 100.
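In other words, capacity and size are different things:

var list = new List<int>(100);
Console.WriteLine(list.Capacity);   // 100 - storage is reserved
Console.WriteLine(list.Count);      // 0   - but the list is still empty
// list[0] = 42;                    // would throw ArgumentOutOfRangeException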
This is an old question, but I have two solutions. One is fast and dirty reflection; the other is a solution that actually answers the question (set the size not the capacity) while still being performant, which none of the answers here do.
Reflection
This is quick and dirty, and should be pretty obvious what the code does. If you want to speed it up, cache the result of GetField, or create a DynamicMethod to do it:
public static void SetSize<T>(this List<T> l, int newSize) =>
l.GetType().GetField("_size", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(l, newSize);
Obviously a lot of people will be hesitant to put such code into production.
ICollection<T>
This solution is based around the fact that the constructor List(IEnumerable<T> collection) optimizes for ICollection<T> and immediately adjusts the size to the correct amount, without iterating it. It then calls the collection's CopyTo to do the copy.
The code for the List<T> constructor is as follows:
public List(IEnumerable<T> collection) {
....
if (collection is ICollection<T> c)
{
int count = c.Count;
if (count == 0)
{
_items = s_emptyArray;
}
else {
_items = new T[count];
c.CopyTo(_items, 0);
_size = count;
}
}
So we can completely optimally pre-initialize the List to the correct size, without any extra copying.
How so? By creating an ICollection<T> object that does nothing other than return a Count. Specifically, we will not implement anything in CopyTo which is the only other function called.
private struct SizeCollection<T> : ICollection<T>
{
public SizeCollection(int size) =>
Count = size;
public void Add(T i){}
public void Clear(){}
public bool Contains(T i)=>true;
public void CopyTo(T[]a, int i){}
public bool Remove(T i)=>true;
public int Count {get;}
public bool IsReadOnly=>true;
public IEnumerator<T> GetEnumerator()=>null;
IEnumerator IEnumerable.GetEnumerator()=>null;
}
public List<T> InitializedList<T>(int size) =>
new List<T>(new SizeCollection<T>(size));
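A quick usage sketch of the trick above:

var list = InitializedList<string>(1000);   // Count == 1000, every entry is null
list[500] = "hello";                        // indexing works immediately, no Add calls needed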
We could in theory do the same thing for AddRange/InsertRange for an existing array, which also accounts for ICollection<T>, but the code there creates a new array for the supposed items, then copies them in. In such case, it would be faster to just empty-loop Add:
public void SetSize<T>(this List<T> l, int size)
{
if(size < l.Count)
l.RemoveRange(size, l.Count - size);
else
for(size -= l.Count; size > 0; size--)
l.Add(default(T));
}
Initializing the contents of a list like that isn't really what lists are for. Lists are designed to hold objects. If you want to map particular numbers to particular objects, consider using a key-value pair structure like a hash table or dictionary instead of a list.
You seem to be emphasizing the need for a positional association with your data, so wouldn't an associative array be more fitting?
Dictionary<int, string> foo = new Dictionary<int, string>();
foo[2] = "string";
The accepted answer (the one with the green check mark) has an issue.
The problem:
var result = Lists.Repeated(new MyType(), sizeOfList);
// each item in the list references the same MyType() object
// if you edit item 1 in the list, you are also editing item 2 in the list
I recommend changing the line above to perform a copy of the object. There are many different articles about that:
String.MemberwiseClone() method called through reflection doesn't work, why?
https://code.msdn.microsoft.com/windowsdesktop/CSDeepCloneObject-8a53311e
If you want to initialize every item in your list with the default constructor, rather than NULL, then add the following method:
public static List<T> RepeatedDefaultInstance<T>(int count)
{
List<T> ret = new List<T>(count);
for (var i = 0; i < count; i++)
{
ret.Add((T)Activator.CreateInstance(typeof(T)));
}
return ret;
}
You can use Linq to cleverly initialize your list with a default value. (Similar to David B's answer.)
var defaultStrings = (new int[10]).Select(x => "my value").ToList();
Go one step further and initialize each string with distinct values "string 1", "string 2", "string 3", etc.:
int i = 1;
var numberedStrings = (new int[10]).Select(x => "string " + i++).ToList();
string [] temp = new string[] {"1","2","3"};
List<string> temp2 = temp.ToList();
After thinking again, I had found the non-reflection answer to the OP question, but Charlieface beat me to it. So I believe that the correct and complete answer is https://stackoverflow.com/a/65766955/4572240
My old answer:
If I understand correctly, you want the List<T> version of new T[size], without the overhead of adding values to it.
If you are not afraid the implementation of List<T> will change dramatically in the future (and in this case I believe the probability is close to 0), you can use reflection:
public static List<T> NewOfSize<T>(int size) {
var list = new List<T>(size);
var sizeField = list.GetType().GetField("_size",BindingFlags.Instance|BindingFlags.NonPublic);
sizeField.SetValue(list, size);
return list;
}
Note that this takes into account the default functionality of the underlying array to prefill with the default value of the item type. All int arrays will have values of 0 and all reference type arrays will have values of null. Also note that for a list of reference types, only the space for the pointer to each item is created.
If you, for some reason, decide on not using reflection, I would have liked to offer an option of AddRange with a generator method, but underneath List<T> just calls Insert a zillion times, which doesn't serve.
I would also like to point out that the Array class has a static Resize method (Array.Resize), if you want to go the other way around and start from an array.
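For reference, that looks like this:

var arr = new string[10];
Array.Resize(ref arr, 20);   // arr now refers to a new length-20 array; the original 10 entries are copied over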
To end, I really hate when I ask a question and everybody points out that it's the wrong question. Maybe it is, and thanks for the info, but I would still like an answer, because you have no idea why I am asking it. That being said, if you want to create a framework that has an optimal use of resources, List<T> is a pretty inefficient class for anything than holding and adding stuff to the end of a collection.
A notice about IList:
MSDN IList Remarks:
"IList implementations fall into three categories: read-only, fixed-size, and variable-size. (...). For the generic version of this interface, see
System.Collections.Generic.IList<T>."
IList<T> does NOT inherit from IList (but List<T> does implement both IList<T> and IList), and it is always variable-size.
Since .NET 4.5, we have also IReadOnlyList<T> but AFAIK, there is no fixed-size generic List which would be what you are looking for.
This is a sample I used for my unit tests. I created a list of class objects, then used a for loop to add the 'X' number of objects that I am expecting from the service.
This way you can add/initialize a List for any given size.
public void TestMethod1()
{
var expected = new List<DotaViewer.Interface.DotaHero>();
for (int i = 0; i < 22; i++)//You add empty initialization here
{
var temp = new DotaViewer.Interface.DotaHero();
expected.Add(temp);
}
var nw = new DotaHeroCsvService();
var items = nw.GetHero();
CollectionAssert.AreEqual(expected,items);
}
Hope I was of help to you guys.
A bit late, but the first solution you proposed seems far cleaner to me: you don't allocate memory twice.
Even the List constructor needs to loop through the array in order to copy it; it doesn't even know in advance that there are only null elements inside.
1. Allocate N, then loop N times:
Cost: 1 * allocate(N) + N * loop_iteration
2. Allocate N, then allocate N again while looping over the array:
Cost: 2 * allocate(N) + N * loop_iteration
However, List's allocation and loops might be faster since List is a built-in class, but C# is JIT-compiled, so...
