Insert into ObservableCollection per int comparison - c#

Say I have an ObservableCollection with two items:
0: dateUnix: 333
1: dateUnix: 222
Now I want to add a new Item:
dateUnix: 300
If I just use the .Add() method, the item gets added at the end. But I want the item to be inserted between 333 and 222, since that keeps the list sorted.
How do I insert an item at the position where it is less than the item before it and greater than the item after it?

Off the top of my head, I can think of two ways of doing this.
One would be, as was pointed out in the comments, to just insert and sort afterwards.
Another, more complex but more rewarding way would be to find the index of the first item greater (or less) than the one you're inserting and insert at that index. Your list seems to be sorted in descending order, so you'd need the first item less than it.
You could achieve this using LINQ:
ObservableCollection<int> collection = new ObservableCollection<int>(new List<int> { 333, 222 }); // == [333, 222]
int toInsert = 300;
collection.Insert(collection.IndexOf(collection.First(elem => elem < toInsert)), toInsert); // == [333, 300, 222]

If your collection is already sorted, just find the appropriate index to insert the element at (either via a linear or the faster binary search) and use Insert to store the element at that specific index.
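As a sketch of that approach (assuming a descending-sorted collection of ints, as in the question), a hand-rolled binary search can find the insertion index before calling Insert:

```csharp
using System.Collections.ObjectModel;

// Binary search for the insertion index in a descending-sorted collection.
static int FindInsertIndex(ObservableCollection<int> sorted, int value)
{
    int lo = 0, hi = sorted.Count;
    while (lo < hi)
    {
        int mid = (lo + hi) / 2;
        if (sorted[mid] > value) lo = mid + 1; // still among the larger elements
        else hi = mid;
    }
    return lo;
}

var collection = new ObservableCollection<int> { 333, 222 };
collection.Insert(FindInsertIndex(collection, 300), 300);
// collection is now [333, 300, 222]
```

Unlike the First-based LINQ one-liner, this also handles the edge case where the new value is smaller than every existing element: it returns Count and the item lands at the end instead of First throwing.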

Related

Foreach loop skip existing item

Here is my code:
int k = panel.Controls.OfType<DataGridView>().Count<DataGridView>();
foreach (Control control in panel.Controls.OfType<DataGridView>())
{
panel.Controls.Remove(control);
}
I have 4 DataGridView objects on the panel, created at runtime with the names 0, 1, 2, and 3, and k shows the correct value (4). But my foreach loop's first iteration gets "0", the second gets "2", and then the loop ends. It skips two objects and I don't know why.
If I put a second and third foreach statement, the result is correct.
Why is that so?
In the first iteration your Control is "0".
You remove that control.
The next iteration gets the next Control. Since the enumerator has already returned the first element, it now returns the second, but because you removed "0", the second element is now "2".
The key to understanding this behaviour is that OfType() returns a lazy iterator, not a list. If you call OfType().ToList() you get a concrete list that is not changed when you alter the list you derived it from.
So:
IEnumerable<object> x = underlyingList.OfType<object>(); returns a lazy iterator.
List<object> y = underlyingList.OfType<object>().ToList(); returns a concrete list.
When you delete objects in the list you're iterating over you also alter its length.
First iteration you're at index 0 and length 4.
Second iteration is at index 1 (originally this item was index 2) and length 3.
Then your loop terminates because you're at index 2, but there's no element at that index anymore: the ones that were there are now at indexes 0 and 1.
If you want to remove all elements this way, iterate over the list backwards instead; that way removals don't shift the elements you haven't visited yet, so you won't miss any.
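The backwards iteration can be sketched with a plain List<string> standing in for panel.Controls (the names here are made up for illustration, not from the question):

```csharp
using System;
using System.Collections.Generic;

// Stand-in for panel.Controls; the element names are illustrative.
var controls = new List<string> { "grid0", "grid1", "grid2", "grid3" };

// Walk the indices backwards: removing element i never shifts
// the elements at indices < i that we haven't visited yet.
for (int i = controls.Count - 1; i >= 0; i--)
{
    if (controls[i].StartsWith("grid"))  // stand-in for "control is DataGridView"
        controls.RemoveAt(i);
}
// controls is now empty; nothing was skipped
```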

Add to list then sort vs FindLastIndex then insert

I have a List of items sorted by a property. I want to add another item to the list and still have a sorted list. I see two simple ways to do this, but which is faster/better?
Add item to list then Sort the list OR
Use FindLastIndex then Insert item on index + 1
Or is there another way that i do not know of?
Here are some interesting facts:
Option 1: 1) Adding the item to the list is O(1) (amortized). 2) Sorting the list by some sorting algorithm: the fastest general-purpose worst case is O(n log n).
Option 2: 1) Find. Using binary search the worst case would be O(log n), but that's not your case: FindLastIndex takes a predicate and is an O(n) operation, where n is the length of the list. 2) Inserting at the index is O(n) for a List<T>, since the later elements have to shift, but that's still cheaper than a full sort.
Basically, if you just want to add one item and keep the list sorted, the second option is definitely faster and the better choice.
In my opinion,
Use FindLastIndex then Insert item on index + 1
should be better, because re-sorting will walk over the whole list at least once.
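For an ascending-sorted List<T>, the framework's own BinarySearch can replace FindLastIndex and gives the O(log n) lookup directly; when the item isn't found, the bitwise complement of the negative return value is the insertion index:

```csharp
using System;
using System.Collections.Generic;

var list = new List<int> { 10, 20, 40 };  // already sorted ascending

int item = 30;
int index = list.BinarySearch(item);
if (index < 0)
    index = ~index;  // not found: complement of the result is the insertion point
list.Insert(index, item);
// list is now [10, 20, 30, 40]
```

Note that Insert itself still costs O(n) for a List<T> because the later elements shift, but that is still much cheaper than re-sorting the whole list.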

Compare List of Integers and add/remove the rows in database using the difference in result LinqtoSQL

Currently I'm working on a project using LINQ to SQL and I would like a simpler solution to my current problem.
Example:
Let's say I have a table named Example with three rows (with values 1, 2, 4).
In code (C#) I have those values as a list of integers (let's name it lstExisting).
In my method I have another list of integers (say lstCurrent) with the values (1, 2, 3).
Now I want to compare both lists, find the difference, and update the database accordingly: in my example, a new row with value 3 should be added and the existing row with value 4 should be deleted.
PS: the integer values will always be unique (1, 2, 3, 4 in this example).
LINQ solutions are preferable, but I don't mind other easy solutions.
Thanks
You need to find new items and to be deleted items using Except like:
var newItems = lstCurrent.Except(lstExisting).ToList();
var toBeDeletedItems = lstExisting.Except(lstCurrent).ToList();
Later you can iterate each list and Add/Delete accordingly.
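The Except part is runnable on its own; the database half depends on your LINQ to SQL model, so it is only sketched in comments with hypothetical names (db, Examples, Example.Value):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var lstExisting = new List<int> { 1, 2, 4 };  // values currently in the table
var lstCurrent  = new List<int> { 1, 2, 3 };  // values passed to the method

var newItems = lstCurrent.Except(lstExisting).ToList();         // [3]
var toBeDeletedItems = lstExisting.Except(lstCurrent).ToList(); // [4]

// With a LINQ to SQL DataContext, the two lists would then drive the
// updates, roughly like this (db, Examples and Example are hypothetical):
//   foreach (var v in newItems)
//       db.Examples.InsertOnSubmit(new Example { Value = v });
//   foreach (var v in toBeDeletedItems)
//       db.Examples.DeleteOnSubmit(db.Examples.Single(e => e.Value == v));
//   db.SubmitChanges();
```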
Try using Contains(). With the two lists, you can write something like this. It iterates over each item in your method's list and checks whether the original list contains it.
var lstExisting = getExistingList();
var lstCurrent = getCurrentList();
foreach (var currentInt in lstCurrent)
{
    if (!lstExisting.Contains(currentInt))
    {
        // insert currentInt
    }
}

Unusual hashset implementation: access a random element?

Background: In my program I have a list of nodes (a class I have defined). They each have a unique id number and a non-unique "region" number. I want to randomly select a node, record its id number, then remove all nodes of the same region from the list.
Problem: Someone pointed out to me that using a hashset instead of a list would be much faster, as a hashset's "order" is effectively random for my purposes and removing elements from it would be much faster. How would I do this (i.e. how do I access a random element in a hashset? I only know how to check to see if a hashset contains an element I already have)?
Also, I'm not quite sure how to remove all the nodes of a certain region. Do I have to override/define a comparison function to compare node regions? Again, I know how to remove a known element from a hashset, but here I don't know how to remove all nodes of a certain region.
I can post specifics about my code if that would help.
To be clear, the order of items in a HashSet isn't random, it's just not easily determinable. Meaning if you iterate a hash set multiple times, the items will come back in the same order each time, but you have no control over what that order is.
That said, HashSet<T> implements IEnumerable<T>, so you could just pick a random number n and remove the nth item:
// assuming a Random object named rand is defined somewhere (do not declare a new one here)
int n = rand.Next(hashSet.Count);
var item = hashSet.ElementAt(n);
hashSet.Remove(item);
Also, I'm not quite sure how to remove all the nodes of a certain region. Do I have to override/define a comparison function to compare node regions?
Not necessarily - you'll need to scan the hashSet to find matching items (easily done with Linq) and remove each one individually. Whether you do that by just comparing properties or defining an equality comparer is up to you.
foreach (var dupe in hashSet.Where(x => x.Region == item.Region).ToList())
hashSet.Remove(dupe);
Note the ToList which is necessary since you can't modify a collection while iterating over it, so the items to remove need to be stored in a different collection.
Note that you can't override Equals in the Node class for this purpose or you won't be able to put multiple nodes from one region in the hash set.
If you haven't noticed, both of these requirements defeat the purpose of using a HashSet: a HashSet is faster only when looking up a known item; iterating, or searching by property values, is no faster than with a regular collection. It would be like using the phone book to find all people whose phone numbers start with 5.
If you always want the items organized by region, then perhaps a Dictionary<int, List<Node>> is a better structure.
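A minimal sketch of that Dictionary<int, List<Node>> idea (the Node shape and names are assumed from the question's description): pick a random region key, record one of its node ids, then drop the whole region with a single dictionary removal.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var rnd = new Random();
var byRegion = new Dictionary<int, List<Node>>
{
    [0] = new List<Node> { new Node(1, 0), new Node(2, 0) },
    [1] = new List<Node> { new Node(3, 1) },
};

// Pick a random region, record a random node id from it,
// then remove the entire region in one dictionary operation.
var keys = byRegion.Keys.ToList();
int region = keys[rnd.Next(keys.Count)];
var nodesInRegion = byRegion[region];
int pickedId = nodesInRegion[rnd.Next(nodesInRegion.Count)].Id;
byRegion.Remove(region);

// Node shape assumed from the question: a unique id plus a region number.
record Node(int Id, int Region);
```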
There's another alternative approach that you could take that could end up being faster than removals from hash sets, and that's creating a structure that does your job for you in one go.
First up, to give me some sample data I'm running this code:
var rnd = new Random();
var nodes = Enumerable
    .Range(0, 10)
    .Select(n => new Node() { id = n, region = rnd.Next(0, 3) })
    .ToList();
That gives me ten nodes with sequential ids and random regions in the 0-2 range (the original answer showed the data in a screenshot).
Now I build up my structure like this:
var pickable = nodes
    .OrderBy(n => rnd.Next())
    .ToLookup(n => n.region, n => n.id);
Which gives me a lookup keyed by region, with the node ids shuffled within each group.
Notice how the regions and individual ids are randomized in the lookup. Now it's possible to iterate over the lookup and take just the first element of each group to get both a random region and random node id without the need to remove any items from a hash set.
I wouldn't expect performance to be too much of an issue as I just tried this with 1,000,000 nodes with 1,000 regions and got a result back in just over 600ms.
On a HashSet you can use ElementAt:
notreallrandomObj nrrbase = HS.ElementAt(0);
int region = nrrbase.region;
List<notreallrandomObj> removeItems = new List<notreallrandomObj>();
foreach (notreallrandomObj nrr in HS.Where(x => x.region == region))
removeItems.Add(nrr);
foreach (notreallrandomObj nrr in removeItems)
HS.Remove(nrr);
You can't modify the HashSet while enumerating it, which is why the removals are collected into a separate list first.
Yes, Remove is O(1) on a HashSet, but that does not mean it will be faster than a List. You don't even have a working solution yet and you're already optimizing; that is premature optimization.
With a List you can just use RemoveAll:
ll.RemoveAll(x => x.region == region);

Fastest method of collection searching by DateTime

I have a Dictionary containing 10 keys, each with a list containing up to 30,000 values. The values contain a DateTime property.
I frequently need to extract a small subset of one of the keys, like a date range of 30 - 60 seconds.
Doing this is easy, but getting it to run fast is not so. What would be the most efficient way to query this in-memory data?
Thanks a lot.
Sort the lists by date first, then find the required items by binary search and return them. Finding the first and last index is O(log n), and returning the K matching items is O(K), so in all it's O(K + log n):
IEnumerable<item> GetItems(int startIndex, int endIndex, List<item> input)
{
    for (int i = startIndex; i < endIndex; i++)
        yield return input[i];
}
1) Keep the dictionary, but use a SortedList keyed by the DateTime property, instead of a plain list, for the dictionary values.
2) Binary search for the upper and lower edges of your range in the sorted list, which gives you two indexes.
3) Select the values in the range using SortedList.Values.Skip(lowerIndex).Take(upperIndex - lowerIndex).
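Steps 2-3 can be sketched with a plain sorted List<DateTime> standing in for the real value type (a lower-bound binary search is written out by hand, since List<T>.BinarySearch only targets an exact match):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// First index whose value is >= target, in a list sorted ascending.
static int LowerBound(List<DateTime> sorted, DateTime target)
{
    int lo = 0, hi = sorted.Count;
    while (lo < hi)
    {
        int mid = (lo + hi) / 2;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

// One timestamp per second for 100 seconds, already sorted.
var baseTime = new DateTime(2024, 1, 1);
var times = Enumerable.Range(0, 100)
    .Select(s => baseTime.AddSeconds(s))
    .ToList();

// Extract the 30-60 second window: two O(log n) searches, then a slice.
int start = LowerBound(times, baseTime.AddSeconds(30));
int end   = LowerBound(times, baseTime.AddSeconds(60));
var slice = times.GetRange(start, end - start);  // 30 items
```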
In reply to Aliostad: I don't think binary search will work if the underlying collection is a linked list; it still takes O(n).
The fastest way is to organize the data so it is indexed by the thing you want to search on. Currently it's indexed by key, but you want to search by date, so you'd be best off indexing it by date.
I would keep two dictionaries, one indexed as you do now and one where the items are indexed by date. I would decide on a time frame (say one minute), add each object to a list based on the minute it happens in, and then add each list to the second dictionary under the key of that minute. When you want the data for a particular time frame, generate the relevant minute key(s) and get the list(s) from the dictionary. This relies on being able to derive the key for the other dictionary from the objects, though.
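The minute-bucket index described above could be sketched like this (names are illustrative; the question's value type with its DateTime property is reduced to a bare DateTime here):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Truncate a timestamp to the minute it falls in.
static DateTime MinuteKey(DateTime t) =>
    new DateTime(t.Year, t.Month, t.Day, t.Hour, t.Minute, 0);

// Three minutes of per-second data standing in for the real values.
var timestamps = Enumerable.Range(0, 180)
    .Select(s => new DateTime(2024, 1, 1, 12, 0, 0).AddSeconds(s))
    .ToList();

// Second index: minute -> all values falling in that minute.
var byMinute = timestamps
    .GroupBy(MinuteKey)
    .ToDictionary(g => g.Key, g => g.ToList());

// A query for one particular minute is now a single dictionary lookup.
var hits = byMinute[new DateTime(2024, 1, 1, 12, 1, 0)];  // 60 items
```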
