Keys.PageDown.ToString() returns "Next" - C#

I've read a few articles on this issue. Basically PageDown and PageUp are linked to Next and Prior respectively, for backwards compatibility. The problem with this is that there's no reliable way to get the names I actually want out of the enum (at least none that I can see).
See here for a good explanation. Quite old though, I thought something might have been done to address this by now.
At present there are two options I can see:
Enum.GetNames(typeof (Keys)).GetValue(e.KeyValue);
This returns "Prior" for "PageUp" but "PageDown" for "PageDown".
e.KeyCode.ToString();
This returns "PageUp" for "PageUp" but "Next for "PageDown".
I could handle it manually, but what if there's another instance like this?
Does anyone have a better solution?

Perhaps the best thing to do is to create a lookup table to translate the enum values.
You could implement the lookup table with a dictionary that maps the enum values onto strings; if the dictionary doesn't contain the enum value, fall back to Enum.ToString() to get the name. That way you only need to add the exceptions (such as PageUp and PageDown) to the dictionary.
(Note that if you are displaying these strings to the user and you want to internationalize the strings you will probably need to add translated entries for most of the strings.)
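For example, a minimal sketch of that approach, mapping only the two exceptions from the question and falling back to ToString() for everything else:

using System.Collections.Generic;
using System.Windows.Forms;

static class KeyNames
{
    // Only the enum members whose reported name is wrong for display purposes.
    private static readonly Dictionary<Keys, string> Overrides =
        new Dictionary<Keys, string>
        {
            { Keys.PageUp, "PageUp" },     // otherwise reported as "Prior" by Enum.GetNames
            { Keys.PageDown, "PageDown" }, // otherwise reported as "Next" by ToString()
        };

    public static string GetDisplayName(Keys key)
    {
        string name;
        // Fall back to the default enum name when there is no override.
        return Overrides.TryGetValue(key, out name) ? name : key.ToString();
    }
}

Usage would then be something like string text = KeyNames.GetDisplayName(e.KeyCode); and any further exceptions you run into can be added to the dictionary.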

Related

Replacement .net Dictionary

Given (Simplified description)
One of our services has a lot of instances in memory. About 85% are unique.
We need very fast key-based access to these items, as they are queried very often within a single stack / call. This single context is extremely performance-optimized.
So we started to put them into a dictionary. The performance was OK.
Accessing the items as fast as possible is the most important thing in this case. It is ensured that there are no write operations while reads occur.
Problem
In the meantime we hit the limit on the number of items a dictionary can store.
Die Arraydimensionen haben den unterstützten Bereich überschritten.
bei System.Collections.Generic.Dictionary`2.Resize(Int32 newSize, Boolean forceNewHashCodes)
bei System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
Which translates to "The array dimensions have exceeded the supported range" ("bei" in the stack trace is German for "at").
Solutions like Memcached are, in this specific case, just too slow. It is an isolated, very specific use case encapsulated in a single service.
So we are looking for a replacement of the dictionary for this specific scenario.
Currently I can't find one supporting this. Am I missing something? Can someone point me to one?
As an alternative, if none exists, we are thinking about implementing one ourselves.
We thought about two possibilities: build it from scratch, or wrap multiple dictionaries.
Wrapping multiple dictionaries
When an item is searched for, we could look at the key's HashCode and use its leading digits as an index into a list of wrapped dictionaries. Although this seems easy, it smells to me, and it would mean that the hash code is calculated twice (once by us, once by the inner dictionary), and this scenario is really, really performance-critical.
I know that exchanging a base type like the dictionary is the absolute last resort and I want to avoid it. But currently it looks like there is no way to make the objects more unique, to get dictionary-like performance out of a database, or to save performance somewhere else.
I'm also aware of the usual warnings about optimization, but lower performance here would very badly hit the business requirements behind it.
Before I finished reading your question, the simple multiple-dictionaries approach came to my mind. But you know this solution already. I am assuming you are really hitting the maximum number of items in a dictionary, not some other limit.
I would say go for it. I do not think you should be worried about computing a hash twice. If the keys are somehow long and getting the hash is really a time-consuming operation (which I doubt, but can't be sure since you did not mention what the keys are), you do not need to use the whole keys for your hash function. Just pick whatever part you are OK with processing in your own hashing and distribute the items based on that.
The only thing you need to make sure of here is an even spread of items among your multiple dictionaries. How hard that is to achieve really depends on what your keys are. If they were completely random numbers, you could just use the first byte and it would be fine (unless you needed more than 256 dictionaries). If they are not random numbers, you have to think about the distribution in their domain and write your first-level hash function in a way that achieves that goal of even distribution.
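A minimal sketch of that wrapping idea, assuming string keys; ShardedDictionary and its members are made-up names, and the first-level distribution function is exactly the part you would have to tailor to your key domain:

using System.Collections.Generic;

// Hypothetical wrapper that spreads items over several inner dictionaries.
// The first-level "hash" only looks at the first character of the key, so the
// full key is hashed only once, by the inner dictionary.
public class ShardedDictionary<TValue>
{
    private readonly Dictionary<string, TValue>[] _shards;

    public ShardedDictionary(int shardCount)
    {
        _shards = new Dictionary<string, TValue>[shardCount];
        for (int i = 0; i < shardCount; i++)
            _shards[i] = new Dictionary<string, TValue>();
    }

    private Dictionary<string, TValue> ShardFor(string key)
    {
        // How evenly this spreads the items depends entirely on your key domain;
        // pick a cheap part of the key that is roughly uniformly distributed.
        int firstChar = key.Length == 0 ? 0 : key[0];
        return _shards[firstChar % _shards.Length];
    }

    public void Add(string key, TValue value)
    {
        ShardFor(key).Add(key, value);
    }

    public bool TryGetValue(string key, out TValue value)
    {
        return ShardFor(key).TryGetValue(key, out value);
    }
}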
I've looked at the implementation of the .NET Dictionary and it seems like you should be able to store 2^32 values in it. (Next to the list of buckets, which are themselves linked lists, there is a single array that stores all items, probably for quick iteration; that might be the limiting factor.)
If you haven't added 2^32 values, it might be that there is a limit on the items in a bucket (it's a linked list, so it's probably limited by the maximum stack frame size). In that case you should double-check that your hashing function spreads the items evenly over the dictionary. See this answer for more info: What is the best algorithm for an overridden System.Object.GetHashCode?
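For reference, the hash-combining pattern from that linked answer looks roughly like this; ItemKey, Name and Id are placeholder names for a hypothetical key type:

public class ItemKey
{
    public string Name;
    public int Id;

    public override bool Equals(object obj)
    {
        ItemKey other = obj as ItemKey;
        return other != null && other.Id == Id && other.Name == Name;
    }

    public override int GetHashCode()
    {
        unchecked // overflow simply wraps, which is fine for hashing
        {
            int hash = 17;
            hash = hash * 23 + (Name == null ? 0 : Name.GetHashCode()); // reference-type field
            hash = hash * 23 + Id.GetHashCode();                        // value-type field
            return hash;
        }
    }
}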

What type should I use for "database-like" behaviour in C#?

I'm currently working on an experimental setup that is used to write complex microstructures into glass with a femtosecond laser.
The output power of the laser is regulated by a filter wheel which I control from my C# console application. As I initially do not know the position of the wheel, I need to initialize it on startup by measuring the power at a predefined number of points on the wheel.
This information (power values and their corresponding positions on the wheel) should be stored during runtime. So basically, if a certain output power is requested, the controller will look up the two points between which the desired value lies and then increment the position until it is reached.
This is something I would usually achieve using a database. As the initialization takes place on every startup and the data does not need to be persisted, I would probably prefer to just keep it as an in-memory list.
So my question is:
Is it possible to somehow "index" the power values to retrieve them quickly?
A Dictionary<int, int> would probably be your best bet. Of course, you could switch out the key/value types to match your data if it isn't ints.
You may look at using a SortedDictionary<int, int> if you're going to have to calculate "in-between" values for keys.
Look at the similar question here for an example of finding points between two keys using a SortedDictionary.
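As a hedged sketch of the "two points in between" lookup, here is one way it might look using SortedList<double, int> instead (measured power mapped to wheel position), since SortedList exposes its keys as an indexable sorted list; PowerCalibration, AddMeasurement and FindBracket are invented names for the example:

using System;
using System.Collections.Generic;

class PowerCalibration
{
    // Measured power (key) -> wheel position (value), kept sorted by power.
    private readonly SortedList<double, int> _points = new SortedList<double, int>();

    public void AddMeasurement(double power, int wheelPosition)
    {
        _points[power] = wheelPosition;
    }

    // Finds the two calibration points whose powers bracket the requested power.
    public void FindBracket(double requestedPower, out KeyValuePair<double, int> lower,
                                                   out KeyValuePair<double, int> upper)
    {
        IList<double> keys = _points.Keys; // sorted and indexable
        if (keys.Count < 2 || requestedPower < keys[0] || requestedPower > keys[keys.Count - 1])
            throw new ArgumentOutOfRangeException("requestedPower",
                "Requested power is outside the calibrated range.");

        // A linear scan is fine for a small calibration table; a binary search
        // over the indexable keys would also work.
        for (int i = 1; i < keys.Count; i++)
        {
            if (requestedPower <= keys[i])
            {
                lower = new KeyValuePair<double, int>(keys[i - 1], _points[keys[i - 1]]);
                upper = new KeyValuePair<double, int>(keys[i], _points[keys[i]]);
                return;
            }
        }

        throw new InvalidOperationException(); // unreachable given the range check above
    }
}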
Some time ago I wrote a small post on the different list types in dotnet with pros and cons.
http://www.selfelected.com/list-of-list-and-collection-classes-in-dotnet-11-45/
If you want to map each key to a value, you should use a Dictionary<TKey, TValue>.

Fast way to check for existence and then insert into a SortedList

Whenever I want to insert into a SortedList, I check to see if the item exists, then I insert. Is this performing the same search twice? Once to see if the item is there and again to find where to insert the item? Is there a way to optimize this to speed it up or is this just the way to do it, no changes necessary?
if( sortedList.ContainsKey( foo ) == false ){
    sortedList.Add( foo, 0 );
}
You can add the items to a HashSet as well as to the SortedList; checking the HashSet is the fastest way to see whether you have to add the value to the list.
if( hashSet.Contains( foo ) == false ){
    sortedList.Add( foo, 0 );
    hashSet.Add( foo );
}
You can use the indexer. Internally the indexer does this in an optimal way: it first looks for the index corresponding to the key using a binary search, then uses this index to replace an existing item; otherwise a new item is added, taking into account the index already calculated.
list["foo"] = value;
No exception is thrown whether the key already exists or not.
UPDATE:
If the new value is the same as the old value, replacing the old value has the same effect as doing nothing.
Keep in mind that a binary search is done. This means it takes about 10 steps to find an item among 1000 items: log2(1000) ≈ 10. Therefore doing an extra search will not have a significant impact on speed. Searching among 1,000,000 items only doubles this value (~20 steps).
But setting the value through the indexer will do only one search in any case. I looked at the code using Reflector and can confirm this.
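Applied to the snippet from the question (the value being written is the same constant 0, so an overwrite is harmless), that reduces to:

// One binary search: adds the entry if the key is missing,
// otherwise overwrites the existing value with the same constant.
sortedList[foo] = 0;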
I'm sorry if this doesn't answer your question, but I have to say that the default collection structures in .NET are sometimes unjustifiably limited in features. This could have been handled if the Add method returned a boolean indicating success or failure, very much like HashSet<T>.Add does, so everything happens in one step. In fact, ICollection<T>.Add as a whole should have returned a boolean so that implementations are forced to provide it, much like Java's Collection<E>.add does.
You could either use a SortedDictionary<K, V> as pointed out by Servy, or a combination of HashSet<K> and SortedList<K, V> as in peer's answer for better performance, but neither of them really sticks to the "do it only once" philosophy. I tried a couple of open-source projects to see if there is a better implementation in this respect, but couldn't find one.
Your options:
In the vast majority of cases it's OK to do two lookups; it doesn't hurt much. Stick to that. There is no built-in solution.
Write your own SortedList<K, V> class. It's not difficult at all.
If you're desperate, you can use reflection. The Insert method is a private member of the SortedList class. Kindly don't do it; it's a very, very poor choice, mentioned here only for completeness.
ContainsKey does a binary search, which is O(log n), so unless your list is massive I wouldn't worry about it too much. And, presumably, on insertion it does another binary search to find the location to insert at.
One option to avoid doing the search twice is to use the BinarySearch method of List<T>. It returns a negative value if the item isn't found, and that negative value is the bitwise complement of the place where the item should be inserted. So you can look for an item, and if it's not already in the list, you know exactly where it should be inserted to keep the list sorted.
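A short sketch of that pattern with a plain List<T> kept sorted; AddIfMissing is a made-up helper name:

using System.Collections.Generic;

static class SortedListHelper
{
    // Inserts 'item' into the already-sorted 'sortedKeys' only if it is not present,
    // using a single binary search.
    public static void AddIfMissing(List<string> sortedKeys, string item)
    {
        int index = sortedKeys.BinarySearch(item);
        if (index < 0)
        {
            // A negative result is the bitwise complement of the index
            // where the item should be inserted to keep the list sorted.
            sortedKeys.Insert(~index, item);
        }
    }
}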
SortedList<Key,Value> is a slow data structure that you probably shouldn't use at all. You may have already considered using SortedDictionary<Key,Value> but found it inconvenient because the items don't have indexes (you can't write sortedDictionary[0]) and because you can write a find nearest key operation for SortedList but not SortedDictionary.
But if you're willing to switch to a third-party library, you can get better performance by changing to a different data structure.
The Loyc Core libraries include a data type that works the same way as SortedList<Key,Value> but is dramatically faster when the list is large. It's called BDictionary<Key,Value>.
Now, answering your original question: yes, the way you wrote the code, it performs two searches and one insert (the insert is the slowest part). If you switch to BDictionary, there is a method bdictionary.AddIfNotPresent(key, value) which combines those two operations into a single operation. It returns true if the specified item was added, or false if it was already present.

PeekRange on a stack in C#?

I have a program that needs to store data values and periodically get the last 'x' data values.
I initially thought a stack was the way to go, but I need to be able to see more than just the top value - something like a PeekRange method where I can peek at the last 'x' number of values.
At the moment I'm just using a list and get the last, say, 20 values like this:
var last20 = myList.Skip(myList.Count - 20).ToList();
The list grows all the time the program runs, but I only ever want the last 20 values. Could someone give some advice on a better data structure?
I'd probably use a ring buffer. It's not hard to implement one on your own; AFAIK there's no implementation provided by the Framework.
Well since you mentioned the stack, I guess you only need modifications at the end of the list?
In that case the list is actually a nice solution (cache-efficient and with fast insertion/removal at the end). However, your way of extracting the last few items is somewhat inefficient, because IEnumerable<T> won't expose the random access provided by the List. So the Skip() implementation has to scan the whole list until it reaches the end (or do a runtime type check first to detect that the container implements IList<T>). It is more efficient to either access the items directly by index or (if you need a second array) to use List<T>.CopyTo().
If you need fast removal/insertion at the beginning, you may want to consider a ring buffer or a (doubly) linked list (see LinkedList<T>). The linked list will be less cache-efficient, but it is easy and efficient to navigate and alter from both directions. The ring buffer is a bit harder to implement, but it will be more cache- and space-efficient, so it's probably better if only small value types or reference types are stored, especially when the buffer's size is fixed.
You could just RemoveAt(0) after each Add (if the list is longer than 20), so the list will never hold more than 20 items.
You said stack, but you also said you only ever want the last 20 items. I don't think these two requirements really go together.
I would say that Johannes is right about a ring buffer. It is VERY easy to implement this yourself in .NET; just use a Queue<T> and once you reach your capacity (20) start dequeuing (popping) on every enqueue (push).
If you want your PeekRange to enumerate from most recent to least recent, you can define GetEnumerator to do something like return _queue.Reverse().GetEnumerator();
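A hedged sketch of that Queue<T>-based approach; RecentValues, Add and PeekRange are made-up names for the example:

using System.Collections.Generic;
using System.Linq;

// Keeps only the most recent 'capacity' values.
public class RecentValues<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly int _capacity;

    public RecentValues(int capacity)
    {
        _capacity = capacity;
    }

    public void Add(T value)
    {
        _queue.Enqueue(value);
        if (_queue.Count > _capacity)
            _queue.Dequeue(); // drop the oldest value
    }

    // Most recent first, at most 'count' items.
    public IEnumerable<T> PeekRange(int count)
    {
        return _queue.Reverse().Take(count);
    }
}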
Whoops, .Take() won't do it.
Here's an implementation of .TakeLast():
http://www.codeproject.com/Articles/119666/LINQ-Introducing-The-Take-Last-Operators.aspx

Follow up: How do get some of the object from a list without Linq?

I have a question about this question. I posted a reply there but since it's been marked as answered, I don't think I'll get a response to my post there.
I am running C# framework 2.0 and I would like to get some of the data from a list. The list is a List<>. How can I do that without looping and doing comparison manually on each element of the List<>?
It really looks like the answers are just more elegant ways of comparing every element of the list. Given that the list is not guaranteed to be sorted prior to the search, do any of the methods provided in the original post ensure that they are looking at a smaller subset of the original list?
EDIT: One thing to note is that I'm not trying to accomplish anything specific here. I just want to know whether the solutions provided in the other question truly do what the OP asked with regard to looping through the whole list. In general, to search an unsorted list (at least one that isn't required to be sorted by its data structure), you will have to search the entire list. However, do any of the solutions in the other thread have an underlying optimization that prevents searching the entire list?
EDIT: I really didn't get any answers that were all that helpful but I will give credit to the answer that at least confirmed my common sense belief. If I notice a new answer that is better, I will change my vote.
If your requirement is to find things quickly in an arbitrary collection, then perhaps a list isn't the best data structure for the job. :)
You might want to check out LINQ support for .Net 2.0.
As explained in the thread you mentioned, you can get some of the objects from the list without LINQ.
list = list.FindAll(yourFilterCriteria);
The object yourFilterCriteria is a Predicate and can compare against any property or method of your object, so it's very customizable.
Predicate<SimpleObject> yourFilterCriteria = delegate(SimpleObject simpleObject)
{
    return simpleObject.FirstName.Contains("Skeet") && simpleObject.Age < 30;
};
This example shows that you can search the list without looping manually; you will get all people whose first name contains "Skeet" and whose age is under 30.
If you're only looking for the first match, then the Find method will do the job. It won't necessarily loop through the entire list; it returns the first matching occurrence. However, if you want to find all of them, how exactly do you expect to search through only a subset of the data if it isn't sorted?
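For example, with C# 2.0 delegate syntax a Find call that stops at the first match looks like this (reusing the hypothetical SimpleObject from the snippet above):

SimpleObject firstMatch = list.Find(delegate(SimpleObject simpleObject)
{
    return simpleObject.FirstName.Contains("Skeet") && simpleObject.Age < 30;
});
// firstMatch is null if no element satisfied the predicate.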
