I'm using a List<T> and I need to update the properties of the objects that the list holds.
What would be the most efficient/fastest way to do this? I know that scanning through the index of a List<T> gets slower as the list grows, and that List<T> is not the most efficient collection for updates.
That said, would it be better to:
Remove the matching object and then add a new one?
Scan through the list indexes until you find the matching object and then update its properties?
If I have a collection, say an IEnumerable<T>, and I want to apply its updates to the List<T>, what would be the best approach?
Stub code sample:
public class Product
{
public int ProductId { get; set; }
public string ProductName { get; set; }
public string Category { get; set; }
}
public class ProductRepository
{
List<Product> product = Product.GetProduct();
public void UpdateProducts(IEnumerable<Product> updatedProduct)
{
}
public void UpdateProduct(Product updatedProduct)
{
}
}
You could consider using a Dictionary instead of a List if you want fast lookups. In your case the key would be the ProductId (which I am assuming is unique); see Dictionary<TKey, TValue> on MSDN.
For example:
public class ProductRepository
{
Dictionary<int, Product> products = Product.GetProduct().ToDictionary(p => p.ProductId);
public void UpdateProducts(IEnumerable<Product> updatedProducts)
{
foreach(var productToUpdate in updatedProducts)
{
UpdateProduct(productToUpdate);
}
}
public void UpdateProduct(Product productToUpdate)
{
// look up the existing product by its id
if(products.ContainsKey(productToUpdate.ProductId))
{
var product = products[productToUpdate.ProductId];
// copy the updated property values across
product.ProductName = productToUpdate.ProductName;
}
else
{
//add code or throw exception if you want here.
products.Add(productToUpdate.ProductId, productToUpdate);
}
}
}
Your use case is updating a List<T> which can contain millions of records, where the updated records can be a sub-list or just a single record.
The schema is as follows:
public class Product
{
public int ProductId { get; set; }
public string ProductName { get; set; }
public string Category { get; set; }
}
Does Product contain a primary key? That is, can every Product object be uniquely identified, with no duplicates, so that every update targets a single unique record?
If yes, then it is best to arrange the List<T> as a Dictionary<int, T>, which means every update from the IEnumerable<T> is an O(1) operation; the total work then depends only on the size of the IEnumerable<T>, which I don't expect to be very big. There is the extra memory allocation of a different data structure, but this would be a very fast solution. Jamie Lupton has already provided a solution along similar lines.
If Product can be repeated and there's no primary key, the solution above is not valid; the ideal way to scan through the List<T> is then a binary search, whose time complexity is O(log N). Note that a binary search requires the list to be sorted by the key you are searching on.
Since the size of the IEnumerable<T>, say M, is comparatively small, the overall time complexity would be O(M * log N), where M is much smaller than N.
List<T> has a BinarySearch API which returns the element's index; that index can then be used to update the object at the relevant position.
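For illustration, here is a minimal sketch of that approach, assuming products is the List<Product> and updatedProducts the IEnumerable<Product> from the stub above, and that the list is sorted by ProductId (which BinarySearch requires); Comparer<T>.Create is available from .NET 4.5 onwards:
var byId = Comparer<Product>.Create((a, b) => a.ProductId.CompareTo(b.ProductId));
// BinarySearch requires the list to be sorted with the same comparer
products.Sort(byId);
foreach (var updated in updatedProducts)
{
    int index = products.BinarySearch(updated, byId);
    if (index >= 0)
    {
        // match found: update the properties in place
        products[index].ProductName = updated.ProductName;
        products[index].Category = updated.Category;
    }
}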
In my view, the best option for such a high number of records would be parallel processing along with binary search.
Now, since thread safety is an issue, what I normally do is divide a List<T> into a List<T>[], so that each unit can be assigned to a separate thread. A simple way is to use the MoreLinq Batch API: fetch the number of system processors using Environment.ProcessorCount and then create an IEnumerable<IEnumerable<T>> as follows:
var enumerableList = list.Batch(list.Count / Environment.ProcessorCount).ToList();
Another way is to use the following custom code:
public static class MyExtensions
{
// data - the List<T> to split
// dataCount - calculate once and pass in, to avoid accessing the property every time
// partitionSize - size of each partition, which can be a function of the number of processors
public static List<T>[] SplitList<T>(this List<T> data, int dataCount, int partitionSize)
{
int remainderData;
var fullPartition = Math.DivRem(dataCount, partitionSize, out remainderData);
var listArray = new List<T>[fullPartition];
var beginIndex = 0;
for (var partitionCounter = 0; partitionCounter < fullPartition; partitionCounter++)
{
if (partitionCounter == fullPartition - 1)
listArray[partitionCounter] = data.GetRange(beginIndex, partitionSize + remainderData);
else
listArray[partitionCounter] = data.GetRange(beginIndex, partitionSize);
beginIndex += partitionSize;
}
return listArray;
}
}
Now you can create a Task[], where each Task is assigned one List<T> from the List<T>[] generated above and performs the binary search over its own sub-partition. Though this is repetitive, it combines the power of parallel processing and binary search. Each Task can be started, and then we can wait for all of them to finish using Task.WaitAll(taskArray).
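A rough sketch of that pattern follows; FindAndUpdate is a hypothetical helper that performs the binary search and update within one partition, and the partition size calculation is only illustrative:
int partitionSize = products.Count / Environment.ProcessorCount;
List<Product>[] partitions = products.SplitList(products.Count, partitionSize);

var tasks = new Task[partitions.Length];
for (int i = 0; i < partitions.Length; i++)
{
    var partition = partitions[i]; // capture the partition for the closure
    tasks[i] = Task.Run(() => FindAndUpdate(partition, updatedProducts));
}
Task.WaitAll(tasks);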
Over and above that, if you create a Dictionary<int, T>[] and use parallel processing on top of it, this would be the fastest option.
The final integration of the List<T>[] back into a single List<T> can be done using LINQ Aggregate or SelectMany as follows:
List<T>[] splitListArray = ...; // the partitions produced above
// process splitListArray
var finalList = splitListArray.SelectMany(obj => obj).ToList();
Another option would be to use Parallel.ForEach along with a thread-safe data structure like ConcurrentBag<T>, or perhaps ConcurrentDictionary<int, T> if you are replacing the complete object; if it's just a property update then a plain List<T> would work. Parallel.ForEach internally uses a range partitioner similar to what I have suggested above.
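A minimal sketch of that last option, assuming you are willing to hold the products in a ConcurrentDictionary<int, Product> keyed by ProductId (which is not part of the original stub):
var productsById = new ConcurrentDictionary<int, Product>(
    products.Select(p => new KeyValuePair<int, Product>(p.ProductId, p)));

Parallel.ForEach(updatedProducts, updated =>
{
    // replaces the whole object; AddOrUpdate is safe to call from multiple threads
    productsById.AddOrUpdate(updated.ProductId, updated, (id, existing) => updated);
});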
Which of the solutions above is ideal depends on your use case; you should be able to use a combination of them to achieve the best possible result. Let me know in case you need a specific example.
What exactly is efficiency?
Unless there are literally thousands of items, doing a foreach, a for, or any other type of looping operation will most likely only show differences in the milliseconds. Really. Hence you have wasted more time (at the cost of a programmer at $XX per hour versus what an end user's time costs) trying to find that "best" way.
So if you do have literally thousands of records, I would recommend finding efficiency by processing the list in parallel with the Parallel.ForEach method, which can save time once the record count outweighs the overhead of threading.
IMHO if the record count is greater than 100 it implies that there is a database being used. If a database is involved, write an update sproc and call it a day; I would be hard pressed to write a one-off program to do a specific update which could be done in an easier fashion in said database.
Related
I have classes as follows:
public class Root
{
public int Id {get;set;}
public string PlayerName{get;set;}
}
public class Scores:Root
{
public int GameT{get;set;}
public int GameZ{get;set;}
}
public class Experience:Root
{
public int ExT{get;set;}
public int ExZ{get;set;}
}
public class Total:Root
{
public int TotalT{get;set;}
public int TotalZ{get;set;}
}
TotalT and TotalZ are computed by adding GameT + ExT and GameZ + ExZ respectively. I have an ObservableCollection of Scores and one of Experience from which I want to create another collection of Total; here is what I have done so far:
public ObservableCollection<Total> GetTotal(ObservableCollection<Scores> scores,ObservableCollection<Experience> experiences)
{
var tc= new ObservableCollection<Total>();
foreach(var scr in scores)
{
foreach(var exp in experiences)
{
if(scr.Id==exp.Id)
{
var tt = new Total{
Id = scr.Id,
PlayerName = scr.PlayerName,
TotalT = scr.GameT + exp.ExT,
TotalZ = scr.GameZ + exp.ExZ
};
tc.Add(tt);
}
}
}
return tc;
}
It works but it is too slow, especially when the records begin to hit hundreds. Is there a better way?
It looks like you just want a LINQ inner join:
var query = from score in scores
join exp in experiences on score.Id equals exp.Id
select new Total {
Id = score.Id,
PlayerName = score.PlayerName,
TotalT = score.GameT + exp.ExT,
TotalZ = score.GameZ + exp.ExZ
};
return new ObservableCollection<Total>(query);
That will be more efficient because it iterates over all the experiences once to start with, collecting them by Id, and then iterates over the scores, matching each score against the collection of associated experiences. Basically it turns an O(M * N) operation into an O(M + N) operation.
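If you want to see roughly what the join does under the hood, here is a hand-rolled sketch using ToLookup (not necessarily faster than the join itself, just the same idea spelled out):
var expById = experiences.ToLookup(e => e.Id);

var tc = new ObservableCollection<Total>();
foreach (var scr in scores)
{
    foreach (var exp in expById[scr.Id]) // only the experiences whose Id matches
    {
        tc.Add(new Total
        {
            Id = scr.Id,
            PlayerName = scr.PlayerName,
            TotalT = scr.GameT + exp.ExT,
            TotalZ = scr.GameZ + exp.ExZ
        });
    }
}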
Maybe I'm mistaken, but wouldn't it be a good idea to observe the observable collections, collect totals in real time, and accumulate the results somewhere instead of trying to address the issue all at once?
It's all about implementing the observer/observable pattern. Since you can subscribe to collection changes, you can do work whenever the collection changes. You can also implement INotifyPropertyChanged on Experience.ExT and Experience.ExZ and subscribe to the changes of every property on all objects.
This way, you don't need to work on hundreds of objects at once; you just show what has been accumulated during some period of time.
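As a minimal sketch of that idea, here is how you might keep an existing totals collection (assumed to be an ObservableCollection<Total>) up to date as scores are added; handling of removals and of newly added Experience items is omitted, this only illustrates the subscription:
scores.CollectionChanged += (sender, e) =>
{
    if (e.NewItems == null) return;
    foreach (Scores scr in e.NewItems)
    {
        var exp = experiences.FirstOrDefault(x => x.Id == scr.Id);
        if (exp == null) continue;
        totals.Add(new Total
        {
            Id = scr.Id,
            PlayerName = scr.PlayerName,
            TotalT = scr.GameT + exp.ExT,
            TotalZ = scr.GameZ + exp.ExZ
        });
    }
};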
I've written the following code to set the properties on various classes. It works, but one of my new year's resolutions is to make as much use of LINQ as possible, and obviously this code doesn't. Is there a way to rewrite it in a "pure LINQ" format, preferably without using the foreach loops? (Even better if it can be done in a single LINQ statement - substatements are fine.)
I tried playing around with join but that didn't get me anywhere, hence I'm asking for an answer to this question - preferably without an explanation, as I'd prefer to "decompile" the solution to figure out how it works. (As you can probably guess I'm currently a lot better at reading LINQ than writing it, but I intend to change that...)
public void PopulateBlueprints(IEnumerable<Blueprint> blueprints)
{
XElement items = GetItems();
// item id => name mappings
var itemsDictionary = (
from item in items
select new
{
Id = Convert.ToUInt32(item.Attribute("id").Value),
Name = item.Attribute("name").Value,
}).Distinct().ToDictionary(pair => pair.Id, pair => pair.Name);
foreach (var blueprint in blueprints)
{
foreach (var material in blueprint.Input.Keys)
{
if (itemsDictionary.ContainsKey(material.Id))
{
material.Name = itemsDictionary[material.Id];
}
else
{
Console.WriteLine("m: " + material.Id);
}
}
if (itemsDictionary.ContainsKey(blueprint.Output.Id))
{
blueprint.Output.Name = itemsDictionary[blueprint.Output.Id];
}
else
{
Console.WriteLine("b: " + blueprint.Output.Id);
}
}
}
Definitions of the requisite classes follow; they are merely containers for data and I've stripped out all the bits irrelevant to my question:
public class Material
{
public uint Id { get; set; }
public string Name { get; set; }
}
public class Product
{
public uint Id { get; set; }
public string Name { get; set; }
}
public class Blueprint
{
public IDictionary<Material, uint> Input { get; set; }
public Product Output { get; set; }
}
I don't think this is actually a good candidate for conversion to LINQ - at least not in its current form.
Yes, you have a nested foreach loop - but you're doing something else in the top-level foreach loop, so it's not the easy-to-convert form which just contains nesting.
More importantly, the body of your code is all about side-effects, whether that's writing to the console or changing the values within the objects you've found. LINQ is great when you've got a complicated query and you want to loop over that to act on each item in turn, possibly with side-effects... but your queries aren't really complicated, so you wouldn't get much benefit.
One thing you could do is give Material and Product a common interface containing Id and Name. Then you could write a single method to update the output products and the input materials via itemsDictionary based on a query for each:
UpdateNames(itemsDictionary, blueprints.Select(x => x.Output));
UpdateNames(itemsDictionary, blueprints.SelectMany(x => x.Input.Keys));
...
private static void UpdateNames<TSource>(Dictionary<uint, string> idMap,
IEnumerable<TSource> source) where TSource : INameAndId
{
foreach (TSource item in source)
{
string name;
if (idMap.TryGetValue(item.Id, out name))
{
item.Name = name;
}
}
}
This is assuming you don't actually need the console output. If you do, you could always pass in the appropriate prefix and add an "else" block in the method. Note that I've used TryGetValue instead of performing two lookups on the dictionary for each iteration.
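For completeness, the INameAndId interface used in the constraint above is not defined in the question; a minimal version, with Material and Product implementing it, might look like this:
public interface INameAndId
{
    uint Id { get; }
    string Name { get; set; }
}

public class Material : INameAndId
{
    public uint Id { get; set; }
    public string Name { get; set; }
}

public class Product : INameAndId
{
    public uint Id { get; set; }
    public string Name { get; set; }
}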
I'll be honest, I did not read your code. For me, your question answered itself when you said "code to set the properties." You should not be using LINQ to alter the state of objects / to have side effects. Yes, I know that you could write extension methods that make that happen, but you'd be abusing the functional paradigm behind LINQ, and possibly creating a maintenance burden, especially for other developers who probably won't find any books or articles supporting your endeavor.
As you're interested in doing as much as possible with Linq, you might like to try the VS plugin ReSharper. It will identify loops (or portions of loops) that can be converted to Linq operators. It does a bunch of other helpful stuff with Linq too.
For example, loops that sum values are converted to use Sum, and loops that apply an internal filter are changed to use Where. Even string concatenation or other recursion on an object is converted to Aggregate. I've learned more about Linq from trying the changes it suggests.
Plus ReSharper is awesome for about 1000 other reasons as well :)
As others have said, you probably don't want to do it without foreach loops. The loops signify side-effects, which is the whole point of the exercise. That said, you can still LINQ it up:
var materialNames =
from blueprint in blueprints
from material in blueprint.Input.Keys
where itemsDictionary.ContainsKey(material.Id)
select new { material, name = itemsDictionary[material.Id] };
foreach (var update in materialNames)
update.material.Name = update.name;
var outputNames =
from blueprint in blueprints
where itemsDictionary.ContainsKey(blueprint.Output.Id)
select new { blueprint, name = itemsDictionary[blueprint.Output.Id] };
foreach (var update in outputNames)
update.blueprint.Output.Name = update.name;
What about this
(from blueprint in blueprints
from material in blueprint.Input.Keys
where itemsDictionary.ContainsKey(material.Id)
select new { material, name = itemsDictionary[material.Id] })
.ToList()
.ForEach(rs => rs.material.Name = rs.name);
(from blueprint in blueprints
where itemsDictionary.ContainsKey(blueprint.Output.Id)
select new { blueprint, name = itemsDictionary[blueprint.Output.Id] })
.ToList()
.ForEach(rs => rs.blueprint.Output.Name = rs.name);
See if this works
var res = from blueprint in blueprints
from material in blueprint.Input.Keys
join item in items on
material.Id equals Convert.ToUInt32(item.Attribute("id").Value)
select material.Set(x => { x.Name = item.Attribute("name").Value; });
You won't find a Set method on the objects; an extension method is created for it below. Note that the query is lazy, so you need to enumerate res (for example with ToList()) for the assignments to actually execute.
public static class LinqExtensions
{
/// <summary>
/// Used to modify properties of an object returned from a LINQ query
/// </summary>
public static TSource Set<TSource>(this TSource input,
Action<TSource> updater)
{
updater(input);
return input;
}
}
I have a Report Interface which has a Run method.
There are different types of reports which implement this interface and each run their own kind of report, getting data from different tables.
Each report, using its own data context, gets data which it then populates business objects with, and at the moment they are returned as an Array (I would like to be able to return something like a list, but because you have to define the list's element type that makes it a bit more difficult).
Reflection is then used to find out the properties of the returned data.
I hope I have explained this well enough!
Is there a better way of doing this?
By request:
public interface IReport
{
int CustomerID { get; set; }
Array Run();
}
public class BasicReport : IReport
{
public int CustomerID { get; set; }
public virtual Array Run()
{
Array result = null;
using (BasicReportsDataContext brdc = new BasicReportsDataContext())
{
var queryResult = from j in brdc.Jobs
where j.CustomerID == CustomerID
select new JobRecord
{
JobNumber = j.JobNumber,
CustomerName = j.CustomerName
};
result = queryResult.ToArray();
}
return result;
}
}
The other class then does a foreach over the data, and uses reflection to find out the field names and values and puts that in an xml file.
As it stands everything works - I just can't help thinking there is a better way of doing it - that perhaps my limited understanding of C# doesn't allow me to see yet.
Personally, I would first ask myself if I really need an interface. That would be the case if the classes implementing it are really different by nature (not only by report kind).
If not, i.e. all the implementing classes are basically "reporters", then yes, there is a more convenient way to do this, which is (sketched below):
writing a parent abstract Report class,
giving it a virtual Run method and the CustomerID accessor,
inheriting your "reporter" classes from it.
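A minimal sketch of that shape, reusing the query from BasicReport above (I've made Run abstract rather than virtual, since there is no sensible default implementation):
public abstract class Report
{
    public int CustomerID { get; set; }

    // each concrete report supplies its own query
    public abstract Array Run();
}

public class BasicReport : Report
{
    public override Array Run()
    {
        using (var brdc = new BasicReportsDataContext())
        {
            return (from j in brdc.Jobs
                    where j.CustomerID == CustomerID
                    select new JobRecord
                    {
                        JobNumber = j.JobNumber,
                        CustomerName = j.CustomerName
                    }).ToArray();
        }
    }
}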
I have a problem with how the List Sort method deals with sorting. Given the following element:
class Element : IComparable<Element>
{
public int Priority { get; set; }
public string Description { get; set; }
public int CompareTo(Element other)
{
return Priority.CompareTo(other.Priority);
}
}
If I try to sort it this way:
List<Element> elements = new List<Element>()
{
new Element()
{
Priority = 1,
Description = "First"
},
new Element()
{
Priority = 1,
Description = "Second"
},
new Element()
{
Priority = 2,
Description = "Third"
}
};
elements.Sort();
Then the first element is the previously second element "Second". Or, in other words, this assertion fails:
Assert.AreEqual("First", elements[0].Description);
Why is .NET reordering my list when the elements are essentially the same? I'd like for it to only reorder the list if the comparison returns a non-zero value.
From the documentation of the List.Sort() method from MSDN:
This method uses Array.Sort, which uses the QuickSort algorithm. This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
Here's the link:
http://msdn.microsoft.com/en-us/library/b0zbh7b6.aspx
Essentially, the sort is performing as designed and documented.
Here is an extension method SortStable() for List<T> where T : IComparable<T>:
public static void SortStable<T>(this List<T> list) where T : IComparable<T>
{
var listStableOrdered = list.OrderBy(x => x, new ComparableComparer<T>()).ToList();
list.Clear();
list.AddRange(listStableOrdered);
}
private class ComparableComparer<T> : IComparer<T> where T : IComparable<T>
{
public int Compare(T x, T y)
{
return x.CompareTo(y);
}
}
Test:
[Test]
public void SortStable()
{
var list = new List<SortItem>
{
new SortItem{ SortOrder = 1, Name = "Name1"},
new SortItem{ SortOrder = 2, Name = "Name2"},
new SortItem{ SortOrder = 2, Name = "Name3"},
};
list.SortStable();
Assert.That(list.ElementAt(0).SortOrder, Is.EqualTo(1));
Assert.That(list.ElementAt(0).Name, Is.EqualTo("Name1"));
Assert.That(list.ElementAt(1).SortOrder, Is.EqualTo(2));
Assert.That(list.ElementAt(1).Name, Is.EqualTo("Name2"));
Assert.That(list.ElementAt(2).SortOrder, Is.EqualTo(2));
Assert.That(list.ElementAt(2).Name, Is.EqualTo("Name3"));
}
private class SortItem : IComparable<SortItem>
{
public int SortOrder { get; set; }
public string Name { get; set; }
public int CompareTo(SortItem other)
{
return SortOrder.CompareTo(other.SortOrder);
}
}
In the test method, if you call Sort() method instead of SortStable(), you can see that the test would fail.
You told it how to compare things and it did. You should not rely on the internal implementation of Sort in your application. That's why it lets you override CompareTo. If you want to have a secondary sort parameter ("description" in this case), code it into your CompareTo. Relying on how Sort just happens to work is a great way to code in a bug that is very difficult to find.
You could find a stable quicksort for .NET or use a merge sort (which is already stable).
See the other responses for why List.Sort() is unstable. If you need a stable sort and are using .NET 3.5, try Enumerable.OrderBy() (LINQ).
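For example, applied to the question's list:
// OrderBy is documented to perform a stable sort, unlike List<T>.Sort()
elements = elements.OrderBy(e => e.Priority).ToList();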
You can fix this by adding an "index value" to your structure, and including that in the CompareTo method when Priority.CompareTo returns 0. You would then need to initialize the "index" value before doing the sort.
The CompareTo method would look like this:
public int CompareTo(Element other)
{
var ret = Priority.CompareTo(other.Priority);
if (ret == 0)
{
ret = Comparer<int>.Default.Compare(Index, other.Index);
}
return ret;
}
Then instead of doing elements.Sort(), you would do:
for(int i = 0; i < elements.Count; ++i)
{
elements[i].Index = i;
}
elements.Sort();
In some applications, when a list of items is sorted according to some criterion, preserving the original order of items which compare equal is unnecessary. In other applications, it is necessary. Sort methods which preserve the arrangement of items with matching keys (called "stable sorts") are generally either much slower than those which do not ("unstable sorts"), or else they require a significant amount of temporary storage (and are still somewhat slower). The first "standard library" sort routine to become widespread was probably the qsort() function included in the standard C library. That library would frequently have been used to sort lists that were large relative to the total amount of memory available. The library would have been much less useful if it had required an amount of temporary storage proportional to the number of items in the array to be sorted.
Sort methods that will be used under frameworks like Java or .net could practically make use of much more temporary storage than would have been acceptable in a C qsort() routine. A temporary memory requirement equal to the size of the array to be sorted would in most cases not pose any particular problem. Nonetheless, since it's been traditional for libraries to supply a Quicksort implementation, that seems to be the pattern followed by .net.
If you wanted to sort based on two fields instead of one you could do this:
class Element : IComparable<Element>
{
public int Priority { get; set; }
public string Description { get; set; }
public int CompareTo(Element other)
{
if (Priority.CompareTo(other.Priority) == 0)
{
return Description.CompareTo(other.Description);
} else {
return Priority.CompareTo(other.Priority);
}
}
}
Obviously, this doesn't make the sort algorithm stable; however, it does satisfy your assertion, and gives you control of the element order in the event of a priority tie.
How would you implement a capacity-limited, generic MruList in C# or Java?
I want to have a class that represents a most-recently-used cache or list (= MruList). It should be generic, and limited to a capacity (count) specified at instantiation. I'd like the interface to be something like:
public interface IMruList<T>
{
public T Store(T item);
public void Clear();
public void StoreRange(T[] range);
public List<T> GetList();
public T GetNext(); // cursor-based retrieval
}
Each Store() should put the item at the top (front?) of the list. The GetList() should return all items in an ordered list, ordered by most recent store. If I call Store() 20 times and my list is 10 items long, I only want to retain the 10 most-recently Stored items. The GetList and StoreRange is intended to support retrieval/save of the MruList on app start and shutdown.
This is to support a GUI app.
I guess I might also want to know the timestamp on a stored item. Maybe. Not sure.
Internally, how would you implement it, and why?
(no, this is not a course assignment)
A couple of comments about your approach:
Why have Store return T? I know what I just added; returning it back to me is unnecessary unless you explicitly want method chaining.
Refactor GetNext() into a new class. It represents a different set of functionality (storage vs. cursor traversal) and should be represented by a separate interface. It also raises usability concerns: what happens when two different methods active on the same stack want to traverse the structure?
GetList() should likely return IEnumerable<T>. Returning List<T> either forces an explicit copy up front or returns a pointer to an underlying implementation. Neither is a great choice.
As for the best structure to back the interface: it seems like the best option is a data structure which is efficient at adding to one end and removing from the other. A doubly linked list would suit this nicely.
Here's a Cache class that stores objects by the time they were accessed. More recent items bubble to the end of the list. The cache operates off an indexer property that takes an object key. You could easily replace the internal dictionary with a list and reference the list from the indexer.
BTW, you should rename the class to MRU as well :)
class Cache
{
Dictionary<object, object> cache = new Dictionary<object, object>();
/// <summary>
/// Keeps up with the most recently read items.
/// Items at the end of the list were read last.
/// Items at the front of the list have been the most idle.
/// Items at the front are removed if the cache capacity is reached.
/// </summary>
List<object> priority = new List<object>();
public Type Type { get; set; }
public Cache(Type type)
{
this.Type = type;
//TODO: register this cache with the manager
}
public object this[object key]
{
get
{
lock (this)
{
if (!cache.ContainsKey(key)) return null;
//move the item to the end of the list
priority.Remove(key);
priority.Add(key);
return cache[key];
}
}
set
{
lock (this)
{
if (Capacity > 0 && cache.Count == Capacity)
{
cache.Remove(priority[0]);
priority.RemoveAt(0);
}
cache[key] = value;
priority.Remove(key);
priority.Add(key);
if (priority.Count != cache.Count)
throw new Exception("Capacity mismatch.");
}
}
}
public int Count { get { return cache.Count; } }
public int Capacity { get; set; }
public void Clear()
{
lock (this)
{
priority.Clear();
cache.Clear();
}
}
}
I would have an internal ArrayList and have Store() delete the last element if its size exceeds the capacity established in the constructor. I think standard terminology, strangely enough, calls this an "LRU" list, because the least-recently-used item is what gets discarded. See wikipedia's entry for this.
You can build this with a System.Collections.Generic.LinkedList<T>.
When you push an item into a full list, delete the last one and insert the new one at the front. Most operations should be O(1), which is better than an array-based implementation.
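A minimal sketch along those lines, covering only Store and GetList from the proposed interface (capacity handling as described above, no timestamps, not thread safe):
public class MruList<T>
{
    private readonly LinkedList<T> items = new LinkedList<T>();
    private readonly int capacity;

    public MruList(int capacity)
    {
        this.capacity = capacity;
    }

    public T Store(T item)
    {
        // the most recently stored item lives at the front
        items.AddFirst(item);
        if (items.Count > capacity)
        {
            items.RemoveLast(); // evict the oldest entry
        }
        return item;
    }

    public List<T> GetList()
    {
        // already ordered from most to least recently stored
        return new List<T>(items);
    }
}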
Everyone enjoys rolling their own container classes.
But in the .NET BCL there is a little gem called SortedList<TKey, TValue>. You can use this to implement your MRU list or any other priority-queue type of list. It keeps its elements sorted by key as they are added.
From SortedList on MSDN:
The elements of a SortedList object are sorted by the keys either according to a specific IComparer implementation specified when the SortedList is created or according to the IComparable implementation provided by the keys themselves. In either case, a SortedList does not allow duplicate keys.
The index sequence is based on the sort sequence. When an element is added, it is inserted into SortedList in the correct sort order, and the indexing adjusts accordingly. When an element is removed, the indexing also adjusts accordingly. Therefore, the index of a specific key/value pair might change as elements are added or removed from the SortedList object.
Operations on a SortedList object tend to be slower than operations on a Hashtable object because of the sorting. However, the SortedList offers more flexibility by allowing access to the values either through the associated keys or through the indexes.
Elements in this collection can be accessed using an integer index. Indexes in this collection are zero-based.
In Java, I'd use the LinkedHashMap, which is built for this sort of thing.
public class MRUList<E> implements Iterable<E> {
private final LinkedHashMap<E, Void> backing;
public MRUList() {
this(10);
}
public MRUList(final int maxSize) {
this.backing = new LinkedHashMap<E,Void>(maxSize, 0.75f, true){
private final int MAX_SIZE = maxSize;
@Override
protected boolean removeEldestEntry(Map.Entry<E,Void> eldest){
return size() > MAX_SIZE;
}
};
}
public void store(E item) {
backing.put(item, null);
}
public void clear() {
backing.clear();
}
public void storeRange(E[] range) {
for (E e : range) {
backing.put(e, null);
}
}
public List<E> getList() {
return new ArrayList<E>(backing.keySet());
}
public Iterator<E> iterator() {
return backing.keySet().iterator();
}
}
However, this does iterate in exactly reverse order (i.e. LRU first, MRU last). Making it MRU-first would require basically reimplementing LinkedHashMap but inserting new elements at the front of the backing list, instead of at the end.
Java 6 added a new Collection type named Deque... for Double-ended Queue.
There's one in particular that can be given a limited capacity: LinkedBlockingDeque.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingDeque;
public class DequeMruList<T> implements IMruList<T> {
private LinkedBlockingDeque<T> store;
public DequeMruList(int capacity) {
store = new LinkedBlockingDeque<T>(capacity);
}
@Override
public void Clear() {
store.clear();
}
@Override
public List<T> GetList() {
return new ArrayList<T>(store);
}
@Override
public T GetNext() {
// Get the item, but don't remove it
return store.peek();
}
@Override
public T Store(T item) {
boolean stored = false;
// Keep looping until the item is added
while (!stored) {
// Add if there's room
if (store.offerFirst(item)) {
stored = true;
} else {
// No room, remove the last item
store.removeLast();
}
}
return item;
}
@Override
public void StoreRange(T[] range) {
for (T item : range) {
Store(item);
}
}
}