How would you implement a capacity-limited, generic MruList in C# or Java?
I want to have a class that represents a most-recently-used cache or list (= MruList). It should be generic, and limited to a capacity (count) specified at instantiation. I'd like the interface to be something like:
public interface IMruList<T>
{
public T Store(T item);
public void Clear();
public void StoreRange(T[] range);
public List<T> GetList();
public T GetNext(); // cursor-based retrieval
}
Each Store() should put the item at the top (front?) of the list. The GetList() should return all items in an ordered list, ordered by most recent store. If I call Store() 20 times and my list is 10 items long, I only want to retain the 10 most-recently Stored items. The GetList and StoreRange is intended to support retrieval/save of the MruList on app start and shutdown.
This is to support a GUI app.
I guess I might also want to know the timestamp on a stored item. Maybe. Not sure.
Internally, how would you implement it, and why?
(no, this is not a course assignment)
A couple of comments about your approach:
Why have Store return T? I know what I just added; returning it back to me is unnecessary unless you explicitly want method chaining.
Refactor GetNext() into a new class. It represents a different set of functionality (storage vs. cursor traversal) and should be represented by a separate interface. It also has usability concerns: what happens when two different methods active on the same stack want to traverse the structure?
GetList() should likely return IEnumerable<T>. Returning List<T> either forces an explicit copy up front or returns a pointer to an underlying implementation. Neither is a great choice.
As for the best structure to back the interface: you want a data structure that is efficient at adding to one end and removing from the other. A doubly linked list suits this nicely.
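To make that concrete, here is a minimal sketch in Java (the question allows either language). The class name and the O(n) scan in store() are my own simplifications, not part of the proposed interface:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch of a capacity-limited MRU list backed by a deque.
// store() puts the item at the front; the least recently stored
// item falls off the back once capacity is exceeded.
class SimpleMruList<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    SimpleMruList(int capacity) {
        this.capacity = capacity;
    }

    void store(T item) {
        items.remove(item);        // re-storing an item moves it to the front (O(n) scan)
        items.addFirst(item);
        if (items.size() > capacity) {
            items.removeLast();    // evict the least recently stored item
        }
    }

    List<T> getList() {            // most recent first
        return new ArrayList<>(items);
    }
}
```

ArrayDeque gives O(1) adds and removes at both ends; the explicit remove(item) scan is exactly what a doubly linked list plus a hash map of node handles would avoid.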
Here's a Cache class that stores objects by the time they were accessed. More recent items bubble to the end of the list. The cache operates off an indexer property that takes an object key. You could easily replace the internal dictionary with a list and reference the list from the indexer.
BTW, you should rename the class to MRU as well :)
class Cache
{
    Dictionary<object, object> cache = new Dictionary<object, object>();

    /// <summary>
    /// Keeps up with the most recently read items.
    /// Items at the end of the list were read last.
    /// Items at the front of the list have been the most idle.
    /// Items at the front are removed if the cache capacity is reached.
    /// </summary>
    List<object> priority = new List<object>();

    public Type Type { get; set; }

    public Cache(Type type)
    {
        this.Type = type;
        //TODO: register this cache with the manager
    }

    public object this[object key]
    {
        get
        {
            lock (this)
            {
                if (!cache.ContainsKey(key)) return null;
                // Move the item to the end of the list.
                priority.Remove(key);
                priority.Add(key);
                return cache[key];
            }
        }
        set
        {
            lock (this)
            {
                if (Capacity > 0 && cache.Count == Capacity)
                {
                    cache.Remove(priority[0]);
                    priority.RemoveAt(0);
                }
                cache[key] = value;
                priority.Remove(key);
                priority.Add(key);
                if (priority.Count != cache.Count)
                    throw new Exception("Capacity mismatch.");
            }
        }
    }

    public int Count { get { return cache.Count; } }

    public int Capacity { get; set; }

    public void Clear()
    {
        lock (this)
        {
            priority.Clear();
            cache.Clear();
        }
    }
}
I would have an internal ArrayList and have Store() delete the last element if its size exceeds the capacity established in the constructor. I think standard terminology, strangely enough, calls this an "LRU" list, because the least-recently-used item is what gets discarded. See Wikipedia's entry for this.
You can build this up with a Collections.Generic.LinkedList<T>.
When you push an item into a full list, delete the last one and insert the new one at the front. Most operations run in O(1), which is better than an array-based implementation.
Everyone enjoys rolling their own container classes.
But in the .NET BCL there is a little gem called SortedList<TKey, TValue>. You can use this to implement your MRU list or any other priority-queue style list. Note that despite the name it is backed by sorted arrays with binary-search lookup, not a tree; SortedDictionary<TKey, TValue> is the tree-based alternative.
From SortedList on MSDN:
The elements of a SortedList object are sorted by the keys either according to a specific IComparer implementation specified when the SortedList is created or according to the IComparable implementation provided by the keys themselves. In either case, a SortedList does not allow duplicate keys.

The index sequence is based on the sort sequence. When an element is added, it is inserted into SortedList in the correct sort order, and the indexing adjusts accordingly. When an element is removed, the indexing also adjusts accordingly. Therefore, the index of a specific key/value pair might change as elements are added or removed from the SortedList object.

Operations on a SortedList object tend to be slower than operations on a Hashtable object because of the sorting. However, the SortedList offers more flexibility by allowing access to the values either through the associated keys or through the indexes.

Elements in this collection can be accessed using an integer index. Indexes in this collection are zero-based.
In Java, I'd use the LinkedHashMap, which is built for this sort of thing.
public class MRUList<E> implements Iterable<E> {
    private final LinkedHashMap<E, Void> backing;

    public MRUList() {
        this(10);
    }

    public MRUList(final int maxSize) {
        // 0.75f is the standard load factor; true enables access order.
        this.backing = new LinkedHashMap<E, Void>(maxSize, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<E, Void> eldest) {
                return size() > maxSize;
            }
        };
    }

    public void store(E item) {
        backing.put(item, null);
    }

    public void clear() {
        backing.clear();
    }

    public void storeRange(E[] range) {
        for (E e : range) {
            backing.put(e, null);
        }
    }

    public List<E> getList() {
        return new ArrayList<E>(backing.keySet());
    }

    public Iterator<E> iterator() {
        return backing.keySet().iterator();
    }
}
However, this does iterate in exactly reverse order (i.e. LRU first, MRU last). Making it MRU-first would require basically reimplementing LinkedHashMap but inserting new elements at the front of the backing list, instead of at the end.
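If MRU-first order only matters when the list is copied out, a cheaper workaround than reimplementing LinkedHashMap (my suggestion, assuming an access-ordered backing map like the one above) is to reverse the copy:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;

// Sketch: copy the access-ordered keys and reverse them, so the most
// recently used element comes first without touching LinkedHashMap itself.
class MruView {
    static <E> List<E> mruFirst(LinkedHashMap<E, Void> backing) {
        List<E> copy = new ArrayList<>(backing.keySet());
        Collections.reverse(copy);  // LRU-first becomes MRU-first
        return copy;
    }
}
```

This costs an O(n) copy plus an O(n) reverse per call, which is usually fine for the small lists a GUI MRU holds; iteration via the raw keySet() stays LRU-first.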
Java 6 added a new Collection type named Deque... for Double-ended Queue.
There's one in particular that can be given a limited capacity: LinkedBlockingDeque.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingDeque;
public class DequeMruList<T> implements IMruList<T> {
    private LinkedBlockingDeque<T> store;

    public DequeMruList(int capacity) {
        store = new LinkedBlockingDeque<T>(capacity);
    }

    @Override
    public void Clear() {
        store.clear();
    }

    @Override
    public List<T> GetList() {
        return new ArrayList<T>(store);
    }

    @Override
    public T GetNext() {
        // Get the item, but don't remove it
        return store.peek();
    }

    @Override
    public T Store(T item) {
        boolean stored = false;
        // Keep looping until the item is added
        while (!stored) {
            // Add if there's room
            if (store.offerFirst(item)) {
                stored = true;
            } else {
                // No room, remove the last item
                store.removeLast();
            }
        }
        return item;
    }

    @Override
    public void StoreRange(T[] range) {
        for (T item : range) {
            Store(item);
        }
    }
}
Related
I need a set that preserves the order in which items were added, just like a list.
The set may also need to be observable.
Is there any built-in set like this in .NET 4?
As far as I know, there is no such type in .NET. I recently needed this and ended up implementing it myself; it's not that difficult.
The trick is to combine a Dictionary<T, LinkedListNode<T>> with a LinkedList<T>. Use the dictionary to query keys and values in O(1) time and the list to iterate in insertion-order. You need a dictionary instead of a set because you want to be able to call LinkedList<T>.Remove(LinkedListNode<T>) and not LinkedList<T>.Remove(T). The former has O(1) time complexity, the latter O(n).
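On the Java side, LinkedHashSet already packages this exact combination: a hash table for O(1) contains/remove plus a doubly linked list threading the entries in insertion order. A minimal sketch of a wrapper:

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

// Sketch of an insertion-ordered set. LinkedHashSet internally combines a
// hash table (O(1) expected contains/remove) with a doubly linked list of
// entries in insertion order, which is the dictionary + linked-list trick.
class InsertionOrderedSet<T> implements Iterable<T> {
    private final LinkedHashSet<T> items = new LinkedHashSet<>();

    boolean add(T item)      { return items.add(item); }
    boolean remove(T item)   { return items.remove(item); }
    boolean contains(T item) { return items.contains(item); }

    @Override
    public Iterator<T> iterator() { return items.iterator(); }
}
```

In .NET, where no such type ships in the framework at that time, the hand-rolled Dictionary<T, LinkedListNode<T>> + LinkedList<T> pair described above plays the same role.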
It sounds like you need a read-only queue. In .NET we have a built-in Queue class, but there is no built-in read-only queue.
To make sure there are no duplicate values you can use a Contains check.
There is one NuGet package which has ImmutableQueue. Not sure if it can help you.
It creates a new queue object every time an Enqueue or Dequeue operation is done.
https://msdn.microsoft.com/en-us/library/dn467186(v=vs.111).aspx
I guess you could use a SortedDictionary<> and a Dictionary<> together to do this.
Assuming that you are never going to do more than int.MaxValue insertions into the set, you can use an integer "sequence number" as a key into a SortedDictionary that keeps track of the inserted items in insertion order.
Alongside this you need to use a Dictionary to map items to the sequence number that was used to insert them.
Putting this together into a class and a demo program: (NOT THREAD SAFE!)
using System;
using System.Collections;
using System.Collections.Generic;
namespace Demo
{
    public sealed class SequencedSet<T> : IEnumerable<T>
    {
        private readonly SortedDictionary<int, T> items = new SortedDictionary<int, T>();
        private readonly Dictionary<T, int> order = new Dictionary<T, int>();
        private int sequenceNumber = 0;

        public void Add(T item)
        {
            if (order.ContainsKey(item))
                return; // Or throw if you want.
            order[item] = sequenceNumber;
            items[sequenceNumber] = item;
            ++sequenceNumber;
        }

        public void Remove(T item)
        {
            if (!order.ContainsKey(item))
                return; // Or throw if you want.
            int sequence = order[item];
            items.Remove(sequence);
            order.Remove(item);
        }

        public bool Contains(T item)
        {
            return order.ContainsKey(item);
        }

        public IEnumerator<T> GetEnumerator()
        {
            return items.Values.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

    internal class Program
    {
        private static void Main()
        {
            var test = new SequencedSet<string>();
            test.Add("One");
            test.Add("Two");
            test.Add("Three");
            test.Add("Four");
            test.Add("Five");
            test.Remove("Four");
            test.Remove("Two");

            foreach (var item in test)
                Console.WriteLine(item);
        }
    }
}
This should be fairly performant for insertions and deletions, but it will take double the memory of course. If you are doing a great number of insertions and deletions you could use a long instead of an int for the sequence numbering.
Unfortunately, if you are doing more than 2^63 insertions, even that won't work - although I would imagine that should be more than enough...
I am working on an assignment that involves managing an inventory of products. Specifically, we're given an interface IProduct and IInventory to implement. Most of it is straightforward, but I've run into a design roadblock, and I'm wondering about best practices.
I have two choices for a backing field of my Inventory class: List or Dictionary (a custom class might be overkill). The assignment asks us to:
Write a method allowing users to add Items to your Inventory.
Disallow adding duplicate items (items with the same name).
Implement Indexers (so that myInv[myItemName] should return the Item corresponding to myItemName)
Write a method returning the list of items in alphabetical order by name.
Given these requirements, I was about to jump in and make the private field a dictionary, but then I saw the requirement:
Write a method returning the list of items in the order they are added to the inventory.
I am wondering what the best course of action in this scenario would be. I'm juggling between two ideas:
Create two private backing fields, a list and a dictionary, but that seems unwieldy and inelegant.
Use a list, and jump through a few hoops for the first four requirements (like writing a loop for the indexers, and making a sorted copy later when asked for alphabetical order).
Which of the above actions should I take, or should I do something completely different?
I'd implement it as a wrapper around a List<InventoryItem> (or whatever specific class represents the inventory item). Something like this:
public class Inventory {
    private List<InventoryItem> inner = new List<InventoryItem>();

    public void Add(InventoryItem item) {
        if (inner.Exists(x => x.Name == item.Name)) {
            throw new ArgumentException("Duplicate item");
        } else {
            inner.Add(item);
        }
    }

    public List<InventoryItem> OrderedByName() {
        return inner.OrderBy(x => x.Name).ToList();
    }

    public List<InventoryItem> OrderedByDate() {
        return inner;
    }

    public InventoryItem this[int i] {
        get {
            return inner[i];
        }
    }
}
I'm developing a multithreaded application with .NET 3.5 that reads records from different tables stored in a database. The readings are very frequent, so a lazy-loading cache implementation is needed. Every table is mapped to a C# class and has a string column that can be used as a key in the cache.
In addition there is the requirement that periodically all the cached records should be refreshed.
I could have implemented the cache with a lock on every reading to ensure a thread-safe environment, but then I thought of another solution that relies on the fact that it is simple to get a list of all possible keys.
So here is the first class I wrote, which stores the list of all the keys, lazy-loaded with the double-checked locking pattern. It also has a method that stores in a static variable the timestamp of the last requested refresh.
public class Globals
{
    private static object _KeysLock = new object();

    public static volatile List<string> Keys;

    public static void LoadKeys()
    {
        if (Keys == null)
        {
            lock (_KeysLock)
            {
                if (Keys == null)
                {
                    List<string> keys = new List<string>();
                    // Filling all possible keys from DB
                    // ...
                    Keys = keys;
                }
            }
        }
    }

    private static long refreshTimeStamp = DateTime.Now.ToBinary();

    public static DateTime RefreshTimeStamp
    {
        get { return DateTime.FromBinary(Interlocked.Read(ref refreshTimeStamp)); }
    }

    public static void NeedRefresh()
    {
        Interlocked.Exchange(ref refreshTimeStamp, DateTime.Now.ToBinary());
    }
}
Then I wrote the CacheItem<T> class, the implementation of a single item of the cache for a specified table T filtered by the key. It has the Load method for lazy-loading the record list and the LoadingTimeStamp property that stores the timestamp of the last record loading. Please note that the static list of records is overwritten with a new one that is filled locally, and then the LoadingTimeStamp is overwritten too.
public class CacheItem<T>
{
    private List<T> _records;

    public List<T> Records
    {
        get { return _records; }
    }

    private long loadingTimestampTick;

    public DateTime LoadingTimestamp
    {
        get { return DateTime.FromBinary(Interlocked.Read(ref loadingTimestampTick)); }
        set { Interlocked.Exchange(ref loadingTimestampTick, value.ToBinary()); }
    }

    public void Load(string key)
    {
        List<T> records = new List<T>();
        // Filling records from DB filtered on key
        // ...
        _records = records;
        LoadingTimestamp = DateTime.Now;
    }
}
And finally here is the Cache<T> class that stores the cache for the table T as a static Dictionary. As you can see the Get method first loads all possible keys in the cache if not already done and then checks the timestamps for the refresh (both are done with the double checked lock pattern). The list of record in the instance returned by a Get call can safely be read by a thread even if there is another thread that is doing the refresh inside the lock, because the refreshing thread does not modify the list itself but creates a new one.
public class Cache<T>
{
    private static object _CacheSynch = new object();
    private static Dictionary<string, CacheItem<T>> _Cache = new Dictionary<string, CacheItem<T>>();
    private static volatile bool _KeysLoaded = false;

    public static CacheItem<T> Get(string key)
    {
        bool checkRefresh = true;
        CacheItem<T> item = null;
        if (!_KeysLoaded)
        {
            lock (_CacheSynch)
            {
                if (!_KeysLoaded)
                {
                    Globals.LoadKeys(); // Checks the lazy loading of the common key list
                    foreach (var k in Globals.Keys)
                    {
                        item = new CacheItem<T>();
                        if (k == key)
                        {
                            // As long as the lock is acquired, load records for the requested key
                            item.Load(key);
                            // then the refresh is no longer needed by the current thread
                            checkRefresh = false;
                        }
                        _Cache.Add(k, item);
                    }
                    _KeysLoaded = true;
                }
            }
        }
        // Here the key is certainly contained in the cache.
        item = _Cache[key];
        if (checkRefresh)
        {
            // Check the timestamps to know whether a refresh is needed.
            DateTime rts = Globals.RefreshTimeStamp;
            if (item.LoadingTimestamp < rts)
            {
                lock (_CacheSynch)
                {
                    if (item.LoadingTimestamp < rts)
                    {
                        // Refresh is needed.
                        item.Load(key);
                    }
                }
            }
        }
        return item;
    }
}
Periodically the Globals.NeedRefresh() is called to ensure the records will be refreshed.
This solution can avoid a lock on every reading of the cache because the cache is pre-filled with all the possible keys: this means that there will be in memory a number of instances that is equal to the number of all possible keys (about 20) for each requested type T (all the T types are about 100), but only for the requested keys the record lists are not empty.
Please let me know if this solution has some thread-safety issue or anything wrong.
Thank you very much.
Given that:
You load all keys once and never change them
You create each dictionary once and never change it
CacheItem.Load is thread-safe because it only ever replaces a private List<T> field with a new completely initialized list.
you don't need any locks at all, so you could simplify the code.
The only possible need for a lock is to prevent concurrent attempts to run CacheItem.Load. Personally I'd just let the concurrent database accesses run, but if you want to prevent it, you could just implement a lock in CacheItem.Load. Or pinch Lazy<T> from .NET 4 and use that as suggested in my answer to your previous question.
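For comparison, the essence of Lazy<T> can be sketched in a few lines of Java (a hand-rolled stand-in, not the .NET type): the supplier runs at most once, and the double-checked lock keeps the common path lock-free:

```java
import java.util.function.Supplier;

// Minimal stand-in for .NET's Lazy<T>: the supplier runs at most once,
// and the double-checked lock keeps the fast path free of contention.
class Lazy<T> {
    private final Supplier<T> supplier;
    private volatile T value;  // volatile makes the published value safe to read unlocked

    Lazy(Supplier<T> supplier) { this.supplier = supplier; }

    T get() {
        T result = value;
        if (result == null) {
            synchronized (this) {
                result = value;
                if (result == null) {
                    result = supplier.get();
                    value = result;
                }
            }
        }
        return result;
    }
}
```

Encapsulating the lock here means CacheItem.Load and the key loading can each wrap their expensive work in one of these instead of repeating the double-checked pattern inline.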
Another comment is that your refresh logic uses DateTime.Now so won't behave as expected (a) during the period when the clocks go back at the end of Daylight Saving Time and (b) if the system clock is updated.
I would simply use a static integer value that is incremented each time NeedRefresh is called.
From comments:
What happens for example if two threads ... try to load the common Globals.Keys at the same time?"
There is a small risk that this may happen once at application startup, but so what? It will result in the 20 keys being read from the database twice, but the performance impact is likely to be negligible. And if you really want to prevent this, any locking could be encapsulated in a class like Lazy<T>.
The comment about using DateTime.Now is in fact a point of interest but I think maybe I can suppose those events can occur while the application is not in use.
You can "suppose" that, but not guarantee it. Your machine may decide to synchronize its time with a time server at any time.
About the advice of using an integer in NeedRefresh I don't understand how can I compare it with each record list state which is represented by a DateTime.
As far as I can see, you only use the DateTime to check if your data was loaded before or after the most recent call to NeedRefresh. So you could replace this by:
public static class Globals
{
    ...
    public static int Version { get { return _version; } }
    private static int _version;

    public static void NeedRefresh()
    {
        Interlocked.Increment(ref _version);
    }
}

public class CacheItem<T>
{
    public int Version { get; private set; }
    ...
    public void Load(string key)
    {
        Version = Globals.Version;
        List<T> records = new List<T>();
        // Filling records from DB filtered on key
        // ...
        _records = records;
    }
}
then when accessing the cache:
item = _Cache[key];
if (item.Version < Globals.Version) item.Load(key);
** UPDATE 2 **
In response to comment:
... There could be a real risk of integrity if one thread tries to read the Dictionary while another is adding items to it inside the lock, couldn't be?
Your existing code adds all keys to the dictionary once, immediately after loading the global keys, and subsequently never modifies the dictionary. So this is thread-safe as long as you don't assign the _Cache field until the dictionary is completely constructed:
var dictionary = new Dictionary<string, CacheItem<T>>(Globals.Keys.Count);
foreach (var k in Globals.Keys)
{
    dictionary.Add(k, new CacheItem<T>());
}
_Cache = dictionary;
I've got a collection of object A's. Each A has a field that correlates it to an object B - of which I have another collection. In other words, each B is attached to a subset of the collection of A's (but only conceptually, not in code). This field - correlating A to B - can change during the life of the system. There are system requirements that prevent changing this structure.
If I need to repeatedly perform operations on each B's set of A's, would it be better to repeatedly use the Where() method on the collection of A's, or to create another collection that B owns, plus a class that manages adding and removing the relevant items?
Let me see if I can capture this in code:
class A {
    public B owner;
    ...
}

class B {
    ...
}

class FrequentlyCalledAction {
    public void DoYourThing(B current) {
        List<A> relevantItems = listOfAllAItems.Where(x => x.owner == current).ToList();
        foreach (A item in relevantItems) {
            ...
        }
    }
}
Vs:
class A {
    public B owner;
    ...
}

class B {
    public List<A> itsItems;
}

class FrequentlyCalledAction {
    public void DoYourThing(B current) {
        foreach (A item in current.itsItems) {
            ...
        }
    }
}

class AManager {
    public void moveItem(A item, B from, B to) {
        from.itsItems.Remove(item);
        to.itsItems.Add(item);
    }
}
This primarily depends on the size of the sets. If there are only a few items, the overhead that comes with solution two is bigger than the performance gain.
In this case I would use solution one, since it has better readability and is less complicated to manage.
If there are thousands of items in the set, I would go for solution two. The moveItem method is an O(n) operation, but it seems that there are more reads than writes in your scenario, so you gain more performance through the more structured design.
In fact it all depends on the size of your collection. Solution 2 is more complex but faster for big collections, while solution 1 can be very fast for fewer than 100/1000 items or so.
Since the sets are small (~100 items) and they change often (~every 4 iterations) do this, then see if you have a problem.
public void DoYourThing(B current)
{
    foreach (A item in listOfAllAItems.Where(a => a.owner == current))
    {
        ...
    }
}
I don't see any point in converting the IEnumerable<A> to a List<A>.
If this gives you a performance problem I don't think AManager is your best answer, although this could depend on how much the relationships change.
If you go for solution 2 it might be worth using a HashSet rather than a List. A HashSet is O(1) for Add & Remove whereas a List is O(1) for Add and O(n) for remove.
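Sketching that suggestion (in Java for brevity; the C# HashSet<T> version is analogous, and the names here are placeholders, not the poster's classes):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: each owner keeps its children in a hash set, so moving an
// item between owners is O(1) expected for both the remove and the add,
// instead of the O(n) List removal.
class Owner<T> {
    final Set<T> items = new HashSet<>();
}

class Mover {
    static <T> void moveItem(T item, Owner<T> from, Owner<T> to) {
        from.items.remove(item);  // O(1) expected
        to.items.add(item);       // O(1) expected
    }
}
```

The trade-off is losing any stable iteration order that a List would give; if order matters, a LinkedHashSet keeps insertion order at the same cost.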
Another option is this which has the advantage that users of A & B don't need to remember to use AManager:
static class GlobalDictionary
{
    private static Dictionary<B, HashSet<A>> dictionary = new Dictionary<B, HashSet<A>>();

    // C# does not allow static indexers, so plain static methods are used here.
    // You could also have Get check whether a set exists for a B and create it if not.
    public static HashSet<A> Get(B obj) { return dictionary[obj]; }
    public static void Set(B obj, HashSet<A> value) { dictionary[obj] = value; }
}

class A
{
    private B owner;

    public B Owner
    {
        get { return owner; }
        set
        {
            if (owner != null) GlobalDictionary.Get(owner).Remove(this);
            owner = value;
            GlobalDictionary.Get(owner).Add(this);
        }
    }
}

class B
{
    public B()
    {
        GlobalDictionary.Set(this, new HashSet<A>());
    }

    public IEnumerable<A> Children
    {
        get
        {
            return GlobalDictionary.Get(this);
        }
    }
}
I haven't tried this so it's likely it'll require some tweaks but you should get the idea.
As a result of another question I asked here, I want to use a HashSet for my objects.
I will create objects containing a string and a reference to its owner.
public class Synonym
{
    private string name;
    private Stock owner;

    public Synonym(string NameSynonym, Stock stock)
    {
        name = NameSynonym;
        owner = stock;
    }
    // [+ 'get' for 'name' and 'owner']
}
I understand I need a comparer, but I've never used one before. Should I create a separate class? Like:
public class SynonymComparer : IComparer<Synonym>
{
    public int Compare(Synonym One, Synonym Two)
    {   // Should I test if 'One == null' or 'Two == null' ????
        return String.Compare(One.Name, Two.Name, true); // Case-insensitive
    }
}
I prefer to have a function (or nested class [maybe a singleton?] if required) being PART of class Synonym instead of another (independent) class. Is this possible?
About usage:
As i never used this kind of thing before I suppose I must write a Find(string NameSynonym) function inside class Synonym, but how should I do that?
public class SynonymManager
{
    private HashSet<SynonymComparer<Synonym>> ListOfSynonyms;

    public SynonymManager()
    {
        ListOfSynonyms = new HashSet<SynonymComparer<Synonym>>();
    }

    public void SomeFunction()
    {   // Just a function to add 2 synonyms to 1 stock
        Stock stock = GetStock("General Motors");
        Synonym otherName = new Synonym("GM", stock);
        ListOfSynonyms.Add(otherName);
        otherName = new Synonym("Gen. Motors", stock);
        ListOfSynonyms.Add(otherName);
    }

    public Synonym Find(string NameSynomym)
    {
        return ListOfSynonyms.??????(NameSynonym);
    }
}
In the code above I don't know how to implement the 'Find' method. How should I do that?
Any help will be appreciated
(PS If my ideas about how it should be implemented are completely wrong let me know and tell me how to implement)
A HashSet doesn't need a IComparer<T> - it needs an IEqualityComparer<T>, such as
public class SynonymComparer : IEqualityComparer<Synonym>
{
    public bool Equals(Synonym one, Synonym two)
    {
        // Adjust according to requirements.
        return StringComparer.InvariantCultureIgnoreCase
            .Equals(one.Name, two.Name);
    }

    public int GetHashCode(Synonym item)
    {
        return StringComparer.InvariantCultureIgnoreCase
            .GetHashCode(item.Name);
    }
}
However, your current code only compiles because you're creating a set of comparers rather than a set of synonyms.
Furthermore, I don't think you really want a set at all. It seems to me that you want a dictionary or a lookup so that you can find the synonyms for a given name:
public class SynonymManager
{
    private readonly IDictionary<string, Synonym> synonyms =
        new Dictionary<string, Synonym>();

    private void Add(Synonym synonym)
    {
        // This will overwrite any existing synonym with the same name.
        synonyms[synonym.Name] = synonym;
    }

    public void SomeFunction()
    {
        // Just a function to add 2 synonyms to 1 stock.
        Stock stock = GetStock("General Motors");
        Synonym otherName = new Synonym("GM", stock);
        Add(otherName);
        otherName = new Synonym("Gen. Motors", stock);
        Add(otherName);
    }

    public Synonym Find(string nameSynonym)
    {
        // This will throw an exception if you don't have
        // a synonym of the right name. Do you want that?
        return synonyms[nameSynonym];
    }
}
Note that there are some questions in the code above, about how you want it to behave in various cases. You need to work out exactly what you want it to do.
EDIT: If you want to be able to store multiple stocks for a single synonym, you effectively want a Lookup<string, Stock> - but that's immutable. You're probably best storing a Dictionary<string, List<Stock>>; a list of stocks for each string.
In terms of not throwing an error from Find, you should look at Dictionary.TryGetValue which doesn't throw an exception if the key isn't found (and also returns whether or not the key was found); the mapped value is "returned" in an out parameter.
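A hedged sketch of that "list of stocks per name" shape, in Java here (Dictionary<string, List<Stock>> with TryGetValue is the C# equivalent, and these class and method names are placeholders of my own):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: a name can map to several stocks, and looking up a missing
// name returns an empty list instead of throwing.
class SynonymTable {
    private final Map<String, List<String>> stocksByName = new HashMap<>();

    void add(String name, String stock) {
        // computeIfAbsent creates the per-name list on first use
        stocksByName.computeIfAbsent(name, k -> new ArrayList<>()).add(stock);
    }

    List<String> find(String name) {
        // getOrDefault is the non-throwing lookup, analogous to TryGetValue
        return stocksByName.getOrDefault(name, List.of());
    }
}
```

The non-throwing lookup keeps the "is this name known?" question and the retrieval in a single call, which is usually what a Find method wants.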
Wouldn't it be more reasonable to scrap the Synonym class entirely and have the list of synonyms be a Dictionary (or, if there is such a thing, a HashDictionary) of strings?
(I'm not very familiar with C# types, but I hope this conveys general idea)
The answer I recommend (edited; it now respects the case):
IDictionary<string, Stock> ListOfSynonyms = new Dictionary<string, Stock>();
IDictionary<string, string> ListOfSynForms = new Dictionary<string, string>();
class Stock
{
    ...
    Stock addSynonym(String syn)
    {
        ListOfSynForms[syn.ToUpper()] = syn;
        return ListOfSynonyms[syn.ToUpper()] = this;
    }

    Array findSynonyms()
    {
        return ListOfSynonyms.findKeysFromValue(this).map(x => ListOfSynForms[x]);
    }
}
...
GetStock("General Motors").addSynonym("GM").addSynonym("Gen. Motors");
...
try
{
    ... ListOfSynonyms[synonym].name ...
}
catch (OutOfBounds e)
{
    ...
}
...
// Output everything that is synonymous to GM. This is a mix of C# and Python.
... GetStock("General Motors").findSynonyms()
// Test if there is a synonym.
if (input in ListOfSynonyms)
{
    ...
}
You can always use LINQ to do the lookup:
public Synonym Find(string NameSynomym)
{
    return ListOfSynonyms.SingleOrDefault(x => x.Name == NameSynomym);
}
But have you considered using a Dictionary instead? I believe it is better suited for extracting single members, and you can still guarantee that there are no duplicates based on the key you choose.
I am not sure what the lookup time of SingleOrDefault is, but I am pretty sure it is linear (O(n)), so if lookup time is important to you, a Dictionary will provide O(1) lookup.
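To illustrate that trade-off (a Java sketch with placeholder data; the C# SingleOrDefault and Dictionary versions behave the same way): the predicate search walks the whole list, while the keyed lookup hashes straight to the entry:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Sketch: a linear scan with a predicate (O(n)) versus a
// constant-time map lookup (O(1) expected).
class LookupDemo {
    static Optional<String> linearFind(List<String> names, String target) {
        // walks every element until one matches
        return names.stream().filter(n -> n.equalsIgnoreCase(target)).findFirst();
    }

    static String mapFind(Map<String, String> byName, String target) {
        // hashes the (normalized) key straight to the entry
        return byName.get(target.toLowerCase());
    }
}
```

The difference only matters once the synonym list grows; for a handful of entries either form is fine, and the linear version needs no key normalization up front.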