I need to optimize the foreach loop below. It is taking too much time to get the unique items.
Could FilterItems instead be converted into a list collection? If so, how? Then I could take the unique items from it easily.
The problem arises when I have 500,000 items in FilterItems.
Please suggest some ways to optimize the code below:
int i = 0;
List<object> order = new List<object>();
List<object> unique = new List<object>();

// FilterItems is a collection of records. Can it be converted to a list collection
// directly, so that I can take the unique items from it?
foreach (Record rec in FilterItems)
{
    string text = rec.GetValue("Column Name");
    int position = order.BinarySearch(text);
    if (position < 0)
    {
        order.Insert(-position - 1, text);
        unique.Add(text);
    }
    i++;
}
It's unclear what you mean by "converting FilterItems into a list" when we don't know anything about it, but you could definitely consider sorting after you've got all the items, rather than as you go:
var strings = FilterItems.Select(record => record.GetValue("Column Name"))
.Distinct()
.OrderBy(x => x)
.ToList();
The use of Distinct() here will avoid sorting lots of equal items - it looks like you only want distinct items anyway.
If you want unique to be in the original order but order to be the same items, just sorted, you could use:
var unique = FilterItems.Select(record => record.GetValue("Column Name"))
.Distinct()
.ToList();
var order = unique.OrderBy(x => x).ToList();
Now Distinct() isn't guaranteed to preserve order - but it does so in the current implementation, and that's the most natural implementation, too.
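For example, here is a minimal sketch (using placeholder values rather than the real FilterItems) showing that Distinct() yields items in first-occurrence order while the sorted copy is built separately:

using System;
using System.Collections.Generic;
using System.Linq;

class DistinctOrderDemo
{
    static void Main()
    {
        // Placeholder values standing in for the column values read from FilterItems.
        var values = new List<string> { "pear", "apple", "pear", "banana", "apple" };

        // Distinct() keeps the first occurrence of each value, in encounter order
        // (true of the current implementation, as noted above).
        var unique = values.Distinct().ToList();       // pear, apple, banana
        var order = unique.OrderBy(x => x).ToList();   // apple, banana, pear

        Console.WriteLine(string.Join(", ", unique));
        Console.WriteLine(string.Join(", ", order));
    }
}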
I have an ordered list of objects, and I would like to find the index of each item where a property changes, and get a dictionary/list of pairs matching index to property. For example, finding the index of each new first letter in a list of words ordered alphabetically.
I can do this with a foreach loop:
Initials = new Dictionary<char, int>();
int i = 0;
foreach (var word in alphabeticallyOrderedList)
{
    if (!Initials.ContainsKey(word.First()))
    {
        Initials[word.First()] = i;
    }
    i++;
}
But I feel like there should be an elegant way of doing this with LINQ.
You can get the same functionality with LINQ by using the overload of Select that exposes the index, together with GroupBy and ToDictionary:
Initials = alphabeticallyOrderedList
.Select((word, index) => new { Word = word, WordIndex = index })
.GroupBy(x => x.Word[0])
.ToDictionary(charGroup => charGroup.Key, charGroup => charGroup.First().WordIndex);
But to quote myself:
LINQ is not always more readable, especially when indexes are important. You also lose some debugging, exception handling and logging capabilities if you use a large LINQ query
I have 2 lists. The first is a list of objects that have an int ID property. The other is a list of ints.
I need to compare these 2 lists and copy into a new list only the objects whose ID matches between the two lists. Right now I am using 2 foreach loops as follows:
var matched = new List<Cars>();
foreach (var car in cars)
{
    foreach (var i in intList)
    {
        if (car.id == i)
            matched.Add(car);
    }
}
This seems like it is going to be very slow, as it iterates over each list many times. Is there a way to do this without using 2 foreach loops like this?
One slow but clear way would be
var matched = cars.Where(car => intList.Contains(car.id)).ToList();
You can make this quicker by turning the intList into a dictionary and using ContainsKey instead.
var intLookup = intList.ToDictionary(k => k);
var matched = cars.Where(car => intLookup.ContainsKey(car.id)).ToList();
Even better still, a HashSet:
var intHash = new HashSet<int>(intList);
var matched = cars.Where(car => intHash.Contains(car.id)).ToList();
You could try some simple LINQ; something like this should work:
var matched = cars.Where(w => intList.Contains(w.id)).ToList();
This will take your list of cars and then find only those items whose id is contained in your intList.
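As a rough end-to-end sketch (the Car class and the sample data below are just assumptions for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical Car class, only to make the example self-contained.
class Car
{
    public int id { get; set; }
    public string Name { get; set; }
}

class MatchDemo
{
    static void Main()
    {
        var cars = new List<Car>
        {
            new Car { id = 1, Name = "Ford" },
            new Car { id = 2, Name = "Opel" },
            new Car { id = 3, Name = "Fiat" },
        };
        var intList = new List<int> { 2, 3, 5 };

        // Keep only the cars whose id appears in intList.
        // (For large lists, a HashSet<int> lookup, as shown above, is faster.)
        var matched = cars.Where(w => intList.Contains(w.id)).ToList();

        foreach (var car in matched)
            Console.WriteLine($"{car.id}: {car.Name}");   // 2: Opel, 3: Fiat
    }
}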
I wanted to ask for suggestions on how I can simplify the foreach block below. I tried to make it all one LINQ statement, but I couldn't figure out how to manipulate the "Count" values inside the query.
More details about what I'm trying to achieve:
- I have a huge list with potential duplicates, where Ids are repeated but the "Count" property holds different numbers
- I want to get rid of the duplicates, but without losing those "Count" values
- so for items with the same Id I sum up the "Count" properties
Still, the current code doesn't look pretty:
var grouped = bigList.GroupBy(c => c.Id).ToList();
foreach (var items in grouped)
{
    var count = 0;
    items.Each(c => count += c.Count);
    items.First().Count = count;
}
var filtered = grouped.Select(y => y.First());
I don't expect the whole solution, pieces of ideas will be also highly appreciated :)
Given that you're mutating the collection, I would personally just make a new "item" with the count:
var results = bigList.GroupBy(c => c.Id)
.Select(g => new Item(g.Key, g.Sum(i => i.Count)))
.ToList();
This performs a simple mapping from the original to a new collection of Item instances, with the proper Id and Count values.
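As a minimal sketch, assuming an Item class roughly like the following (the constructor and property names are placeholders, not the asker's actual type):

using System;
using System.Collections.Generic;
using System.Linq;

// Assumed shape of the item type; adjust to the real class.
class Item
{
    public int Id { get; }
    public int Count { get; set; }

    public Item(int id, int count)
    {
        Id = id;
        Count = count;
    }
}

class GroupSumDemo
{
    static void Main()
    {
        var bigList = new List<Item>
        {
            new Item(1, 2), new Item(1, 3), new Item(2, 5),
        };

        // One new Item per distinct Id, with the Counts summed.
        var results = bigList.GroupBy(c => c.Id)
                             .Select(g => new Item(g.Key, g.Sum(i => i.Count)))
                             .ToList();

        foreach (var item in results)
            Console.WriteLine($"{item.Id}: {item.Count}");   // 1: 5, 2: 5
    }
}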
var filtered = bigList.GroupBy(c => c.Id)
                      .Select(g =>
                      {
                          var f = g.First();
                          f.Count = g.Sum(c => c.Count);
                          return f;
                      });
I have an int array of IDs that are ordered properly. Then I have an array of unordered objects that have ID properties.
I would like to order the objects by ID so that they match the order of the int array.
Something along the lines of
newObjectArray = oldObjectArray.MatchOrderBy(IdArray)
would be most desirable.
I feel like I should be able to accomplish this using LINQ but I have yet to find a way.
My current method doesn't seem very efficient, since it has to query the collection on every iteration. I suspect that performance will suffer for sufficiently large collections, which will eventually happen.
Here is my current implementation:
// This is just dummy data to show you what's going on.
int[] orderedIDs = new int[5] { 5534, 5632, 2334, 6622, 2344 };
MemberObject[] searchResults = MyMethodToGetSearchResults();
MemberObject[] orderedSearchResults = new MemberObject[orderedIDs.Count()];
for (int i = 0; i < orderedIDs.Count(); i++)
{
    orderedSearchResults[i] = searchResults
        .Select(memberObject => memberObject)
        .Where(memberObject => memberObject.id == orderedIDs[i])
        .FirstOrDefault();
}
A brute force implementation:
MemberObject[] sortedResults =
    IdArray.Select(id => searchResults
                   .FirstOrDefault(item => item.id == id))
           .ToArray();
However, this requires reiterating searchResults for every item in IdArray and doesn't deal too neatly with items that have duplicate ids.
Things improve if you make an ILookup of your search results, so that grabbing the correct search result for each item in IdArray is now O(1) time.
ILookup<int, MemberObject> resultLookup = searchResults.ToLookup(x => x.id);
Now:
MemberObject[] sortedResults =
    IdArray.SelectMany(id => resultLookup[id])
           .ToArray();
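For illustration, a small self-contained sketch (the MemberObject class and the data below are assumptions) showing that the lookup also copes with duplicate and missing ids:

using System;
using System.Linq;

// Hypothetical MemberObject, just to make the sketch compile on its own.
class MemberObject
{
    public int id { get; set; }
    public string Name { get; set; }
}

class LookupOrderDemo
{
    static void Main()
    {
        int[] IdArray = { 5632, 2334, 5534 };
        MemberObject[] searchResults =
        {
            new MemberObject { id = 5534, Name = "A" },
            new MemberObject { id = 2334, Name = "B" },
            new MemberObject { id = 5632, Name = "C" },
            new MemberObject { id = 2334, Name = "D" },   // duplicate id
        };

        var resultLookup = searchResults.ToLookup(x => x.id);

        // Each id pulls out all of its matches; ids with no match contribute nothing.
        MemberObject[] sortedResults =
            IdArray.SelectMany(id => resultLookup[id]).ToArray();

        foreach (var m in sortedResults)
            Console.WriteLine($"{m.id}: {m.Name}");   // 5632: C, 2334: B, 2334: D, 5534: A
    }
}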
I have a list of integers that contains a number of values (say, 200).
List<int> ExampleList;
And another list of integers that holds the indexes that need to be deleted from ExampleList. However, this list is not sorted.
List<int> RemoveFromExampleList;
If it were sorted, I would have run a reverse loop and deleted all the values like this:
for (int i = RemoveFromExampleList.Count - 1; i >= 0; i--)
{
    ExampleList.RemoveAt(RemoveFromExampleList[i]);
}
Do I have to sort RemoveFromExampleList, or is there another way to prune the unnecessary values from ExampleList?
If I do have to sort, what's the easiest way? Is there any built-in C# library/method to sort?
If RemoveFromExampleList is a list of indexes, you have to sort it and work in descending order when deleting by those indexes; removing the lower indexes first shifts the remaining elements, so you would end up deleting values you don't mean to delete.
Here is a one-liner:
ExampleList.RemoveAll(x => RemoveFromExampleList.Contains(ExampleList.IndexOf(x)));
You could replace the values you are going to remove with a sentinel value, i.e., one that you know doesn't occur in the list, and then remove all occurrences of that value.
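A minimal sketch of that idea, assuming -1 never occurs in the real data (the sample lists are made up for illustration):

using System;
using System.Collections.Generic;

class SentinelRemoveDemo
{
    static void Main()
    {
        var exampleList = new List<int> { 10, 20, 30, 40, 50 };
        var removeFromExampleList = new List<int> { 3, 0, 1 };   // unsorted indexes

        const int Sentinel = -1;   // a value known not to appear in exampleList

        // Overwrite the doomed positions first; the indexes stay valid because
        // nothing has been removed yet.
        foreach (int index in removeFromExampleList)
            exampleList[index] = Sentinel;

        // Then strip every sentinel in one pass.
        exampleList.RemoveAll(x => x == Sentinel);

        Console.WriteLine(string.Join(", ", exampleList));   // 30, 50
    }
}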
Your option is to sort, yes. Sort the removal list in descending order and then remove by index that way.
// perform an orderby projection, remove
foreach (int index in RemoveFromExampleList.OrderByDescending(i => i))
    ExampleList.RemoveAt(index);
Or
// actually sort the list in place, then remove
RemoveFromExampleList.Sort((a, b) => b.CompareTo(a));
foreach (int index in RemoveFromExampleList)
    ExampleList.RemoveAt(index);
(Assumes there are no duplicates, use .Distinct() on the list/projection if otherwise.)
If you really had some aversion to sorting the list, you could make the list a list of nullable ints:
List<int?> ints;
Then you could nullify the values in the "delete list", and use the RemoveAll method to delete the null values.
But this is obviously a bit of a hack.
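A rough sketch of that hack, using made-up sample data:

using System;
using System.Collections.Generic;

class NullableRemoveDemo
{
    static void Main()
    {
        // The list is declared with nullable ints so that null can mark deleted slots.
        List<int?> ints = new List<int?> { 10, 20, 30, 40, 50 };
        var removeFromExampleList = new List<int> { 3, 0, 1 };   // unsorted indexes

        // Null out the positions to delete; the indexes stay valid since nothing moves yet.
        foreach (int index in removeFromExampleList)
            ints[index] = null;

        // Then remove all the null entries in one pass.
        ints.RemoveAll(x => x == null);

        Console.WriteLine(string.Join(", ", ints));   // 30, 50
    }
}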
You could do it with LINQ / Lambda like so:
// Example: remove items coming from another list.
List<int> masterList = new List<int>();
masterList.Add(1);
masterList.Add(1);
masterList.Add(2);
masterList.Add(3);

List<int> itemsToRemove = new List<int>();
itemsToRemove.Add(1);
itemsToRemove.Add(2);
itemsToRemove.Add(3);

List<int> cleanList = new List<int>();

foreach (int value in itemsToRemove)
{
    masterList = masterList.Where(x => x != value).ToList();
}