C# - LINQ: optimize code with List and Where clause

I have a following code:
var tempResults = new Dictionary<Record, List<Record>>();
errors = new List<Record>();
foreach (Record record in diag)
{
var code = Convert.ToInt16(Regex.Split(record.Line, @"\s{1,}")[4], 16);
var cond = codes.Where(x => x.Value == code && x.Active).FirstOrDefault();
if (cond == null)
{
errors.Add(record);
continue;
}
var min = record.Datetime.AddSeconds(downDiff);
var max = record.Datetime.AddSeconds(upDiff);
//PROBLEM PART - It takes around 4.5ms
var possibleResults = cas.Where(x => x.Datetime >= min && x.Datetime <= max).ToList();
if (possibleResults.Count == 0)
errors.Add(record);
else
{
if (!CompareCond(record, possibleResults, cond, ref tempResults, false))
{
errors.Add(record);
}
}
}
The variable diag is a List<Record>.
The variable cas is a List<Record> with around 50k items.
The problem is that it's too slow. The marked Where clause takes around 4.66 ms per iteration, e.g. for 3000 records in diag that makes 3000 * 4.66 ms ≈ 14 seconds. Is there any option to optimize the code?

You can speed up that specific statement you emphasized
cas.Where(x => x.Datetime >= min && x.Datetime <= max).ToList();
with a binary search over the cas list. First, pre-sort cas by Datetime:
cas.Sort((a,b) => a.Datetime.CompareTo(b.Datetime));
Then create a comparer for Record which compares only the Datetime properties (the implementation assumes there are no null records in the list):
private class RecordDateComparer : IComparer<Record> {
public int Compare(Record x, Record y) {
return x.Datetime.CompareTo(y.Datetime);
}
}
Then you can translate your Where clause like this:
var index = cas.BinarySearch(new Record { Datetime = min }, new RecordDateComparer());
if (index < 0)
index = ~index;
var possibleResults = new List<Record>();
// go backwards, for duplicates
for (int i = index - 1; i >= 0; i--) {
var res = cas[i];
if (res.Datetime <= max && res.Datetime >= min)
possibleResults.Add(res);
else break;
}
// go forward until item bigger than max is found
for (int i = index; i < cas.Count; i++) {
var res = cas[i];
if (res.Datetime <= max && res.Datetime >= min)
possibleResults.Add(res);
else break;
}
The idea is to find the first record with Datetime equal to or greater than your min, using BinarySearch. If an exact match is found, it returns the index of the matched element. If not, it returns a negative value, which can be translated into the index of the first element greater than the target with the ~index operation.
Once that element is found, we can walk forward through the list and grab items until we hit one with Datetime greater than max (this works because the list is sorted). We also need to go a little backwards, because if there are duplicates, a binary search will not necessarily return the first one.
Additional improvements might include:
Putting active codes in a Dictionary (keyed by Value) outside of the loop, replacing the linear codes Where search with a constant-time dictionary lookup.
As suggested in comments by @Digitalsa1nt: parallelize the foreach loop using Parallel.For, PLINQ, or any similar technique. It's a perfect case for parallelization, because the loop contains only CPU-bound work. You need to make a few adjustments for thread safety, of course, such as using a thread-safe collection for errors (or locking around adds to it). A sketch of both ideas follows.
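A minimal sketch of both ideas, assuming the question's variables (diag, codes, errors, record) and a hypothetical Code element type with Value and Active properties, since the question doesn't show the element type of codes:

// Build the lookup once, before the loop (assumes Value is unique
// among active codes).
Dictionary<int, Code> activeCodes = codes
    .Where(c => c.Active)
    .ToDictionary(c => c.Value);

// Inside the loop, the linear Where scan becomes an O(1) lookup:
if (!activeCodes.TryGetValue(code, out Code cond))
{
    errors.Add(record);
    continue;
}

And for the parallel loop, a thread-safe ConcurrentBag (from System.Collections.Concurrent) can replace the List<Record> used for errors:

var errorBag = new ConcurrentBag<Record>();
Parallel.ForEach(diag, record =>
{
    // ... the per-record body of the original loop goes here,
    // with errorBag.Add(record) in place of errors.Add(record) ...
});
errors = errorBag.ToList();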

Try adding AsNoTracking to the query.
The AsNoTracking method can save both execution time and memory usage. Applying this option becomes important when retrieving a large amount of data from the database.
var possibleResults = cas.Where(x => x.Datetime >= min && x.Datetime <= max).AsNoTracking().ToList(); //around 4,6599ms
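One caveat: AsNoTracking is an Entity Framework extension on IQueryable, so it only has an effect when cas is materialized from a database query rather than being an in-memory List<Record>. A sketch with a hypothetical EF context (the DiagContext and CasRecords names are assumptions, not from the question):

using (var db = new DiagContext())          // hypothetical DbContext
{
    var possibleResults = db.CasRecords     // hypothetical DbSet<Record>
        .AsNoTracking()                     // skip change tracking for read-only queries
        .Where(x => x.Datetime >= min && x.Datetime <= max)
        .ToList();
}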

There are a few improvements you can make here.
It might only be a minor performance increase, but you could try using GroupBy instead of Where in this circumstance.
So instead you would have something like this:
cas.GroupBy(x => x.Datetime >= min && x.Datetime <= max).Where(g => g.Key).SelectMany(g => g);
This usually works for searching through lists for distinct values, but in your case I'm unsure whether it will provide any benefit when used with a range condition.
Also, a few other things you can do throughout your code:
Avoid using ToList when possible and stick to IEnumerable. ToList performs an eager evaluation, which is probably causing a lot of the slowdown in your query.
Use .Any() instead of Count when checking whether values exist (this only applies if the source is an IEnumerable rather than a List); see the sketch below.
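A quick sketch of the Any() point, using the question's query (an illustration only; note the original code uses possibleResults again later, so it may still need to be materialized there):

// Any() short-circuits after the first element it finds, whereas
// Count() on an IEnumerable must walk the whole sequence.
IEnumerable<Record> possibleResults =
    cas.Where(x => x.Datetime >= min && x.Datetime <= max);

if (!possibleResults.Any())
    errors.Add(record);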

Related

How to Order By or Sort an integer List and select the Nth element

I have a list, and I want to select the fifth highest element from it:
List<int> list = new List<int>();
list.Add(2);
list.Add(18);
list.Add(21);
list.Add(10);
list.Add(20);
list.Add(80);
list.Add(23);
list.Add(81);
list.Add(27);
list.Add(85);
But OrderByDescending is not working for this int list...
int fifth = list.OrderByDescending(x => x).Skip(4).First();
Depending on how you want to treat a list with fewer than 5 elements, you have 2 options.
If the list should never have fewer than 5 elements, I would treat that as an exception:
int fifth;
try
{
fifth = list.OrderByDescending(x => x).ElementAt(4);
}
catch (ArgumentOutOfRangeException)
{
//Handle the exception
}
If you expect that the list may have fewer than 5 elements, then you could fall back to the default value and check for that:
int fifth = list.OrderByDescending(x => x).ElementAtOrDefault(4);
if (fifth == 0)
{
//handle default
}
This is still somewhat flawed, because the fifth element could legitimately be 0. That can be solved by casting the list to a list of nullable ints before the LINQ query:
var newList = list.Select(i => (int?)i).ToList();
int? fifth = newList.OrderByDescending(x => x).ElementAtOrDefault(4);
if (fifth == null)
{
//handle default
}
Without LINQ expressions:
int result;
if(list != null && list.Count >= 5)
{
list.Sort();
result = list[list.Count - 5];
}
else // define behavior when list is null OR has less than 5 elements
This has better performance than the LINQ expressions, although the LINQ solutions presented in my second answer are convenient and reliable.
In case you need extreme performance for a huge List of integers, I'd recommend a more specialized algorithm, like in Matthew Watson's answer.
Attention: The List gets modified when the Sort() method is called. If you don't want that, you must work with a copy of your list, like this:
List<int> copy = new List<int>(original);
or
List<int> copy = original.ToList();
The easiest way to do this is to just sort the data and take N items from the front. This is the recommended way for small data sets - anything more complicated is just not worth it otherwise.
However, for large data sets it can be a lot quicker to do what's known as a Partial Sort.
There are two main ways to do this: Use a heap, or use a specialised quicksort.
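As a quick illustration of the heap route (a sketch assuming .NET 6+ for PriorityQueue<,>): keep a bounded min-heap of the k largest elements seen so far; after one pass over the data, its root is the k-th largest, for O(n log k) total work.

static int KthLargest(IEnumerable<int> items, int k)
{
    // Min-heap: the smallest of the current top-k candidates sits at the root.
    var heap = new PriorityQueue<int, int>();
    foreach (var item in items)
    {
        heap.Enqueue(item, item);
        if (heap.Count > k)
            heap.Dequeue();  // evict the smallest candidate
    }
    return heap.Peek();      // the k-th largest element overall
}

With the question's list, KthLargest(list, 5) returns the fifth highest value.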
The article I linked describes the heap approach in more detail. I shall present a partial sort below:
public static IList<T> PartialSort<T>(IList<T> data, int k) where T : IComparable<T>
{
int start = 0;
int end = data.Count - 1;
while (end > start)
{
var index = partition(data, start, end);
var rank = index + 1;
if (rank >= k)
{
end = index - 1;
}
else if ((index - start) > (end - index))
{
quickSort(data, index + 1, end);
end = index - 1;
}
else
{
quickSort(data, start, index - 1);
start = index + 1;
}
}
return data;
}
static int partition<T>(IList<T> lst, int start, int end) where T : IComparable<T>
{
T x = lst[start];
int i = start;
for (int j = start + 1; j <= end; j++)
{
if (lst[j].CompareTo(x) < 0) // Or "> 0" to reverse sort order.
{
i = i + 1;
swap(lst, i, j);
}
}
swap(lst, start, i);
return i;
}
static void swap<T>(IList<T> lst, int p, int q)
{
T temp = lst[p];
lst[p] = lst[q];
lst[q] = temp;
}
static void quickSort<T>(IList<T> lst, int start, int end) where T : IComparable<T>
{
if (start >= end)
return;
int index = partition(lst, start, end);
quickSort(lst, start, index - 1);
quickSort(lst, index + 1, end);
}
Then to access the 5th largest element in a list you could do this:
PartialSort(list, 5);
Console.WriteLine(list[4]);
For large data sets, a partial sort can be significantly faster than a full sort.
Addendum
See here for another (probably better) solution that uses a QuickSelect algorithm.
This LINQ approach retrieves the 5th biggest element OR throws an exception WHEN the list is null or contains less than 5 elements:
int fifth = list?.Count >= 5 ?
list.OrderByDescending(x => x).Take(5).Last() :
throw new Exception("list is null OR has not enough elements");
This one retrieves the 5th biggest element OR null WHEN the list is null or contains less than 5 elements:
int? fifth = list?.Count >= 5 ?
list.OrderByDescending(x => x).Take(5).Last() :
default(int?);
if(fifth == null) // define behavior
This one retrieves the 5th biggest element OR the smallest element WHEN the list contains less than 5 elements:
if(list == null || list.Count <= 0)
throw new Exception("Unable to retrieve Nth biggest element");
int fifth = list.OrderByDescending(x => x).Take(5).Last();
All these solutions are reliable, they should NEVER throw "unexpected" exceptions.
PS: I'm using .NET 4.7 in this answer.
Here is a C# implementation of the QuickSelect algorithm for selecting the nth element of an unordered IList<>.
You have to put all the code contained in that page in a static class, like:
public static class QuickHelpers
{
// Put the code here
}
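The linked page's code isn't reproduced here, so purely as an illustration (my own minimal sketch, not the page's implementation), such a class could look like this:

public static class QuickHelpers
{
    // n is the 0-based rank in ascending order (per the comparison).
    public static T QuickSelect<T>(this IList<T> list, int n)
    {
        return list.QuickSelect(n, Comparer<T>.Default.Compare);
    }

    public static T QuickSelect<T>(this IList<T> list, int n, Comparison<T> compare)
    {
        int left = 0, right = list.Count - 1;
        while (true)
        {
            if (left == right)
                return list[left];
            int pivot = Partition(list, left, right, compare);
            if (n == pivot)
                return list[n];
            if (n < pivot)
                right = pivot - 1;
            else
                left = pivot + 1;
        }
    }

    // Lomuto partition using the last element as the pivot.
    private static int Partition<T>(IList<T> list, int left, int right, Comparison<T> compare)
    {
        T pivot = list[right];
        int store = left;
        for (int i = left; i < right; i++)
        {
            if (compare(list[i], pivot) < 0)
            {
                (list[store], list[i]) = (list[i], list[store]);
                store++;
            }
        }
        (list[store], list[right]) = (list[right], list[store]);
        return store;
    }
}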
Given that "library" (in truth a big fat block of code), then you can:
int resA = list.QuickSelect(2, (x, y) => Comparer<int>.Default.Compare(y, x));
int resB = list.QuickSelect(list.Count - 1 - 2);
Now... normally QuickSelect selects the nth lowest element. We reverse that in two ways:
For resA we create a reverse comparer based on the default int comparer, by swapping the parameters of the Compare method. Note that the index is 0-based, so there is a 0th, 1st, 2nd element and so on.
For resB we use the fact that the 0th element in reverse order is the (list.Count - 1)th element in ascending order, so we count from the back: the highest element would be at list.Count - 1 in an ordered list, the next one at list.Count - 1 - 1, then list.Count - 1 - 2, and so on.
Theoretically, using QuickSelect should be better than ordering the list and then picking the nth element: sorting is on average an O(N log N) operation, and picking the nth element afterwards is O(1), so the composite is O(N log N), while QuickSelect is on average O(N). Clearly there is a but: the O notation hides the constant factors, so an O(k1 * N log N) with a small k1 could still beat an O(k2 * N) with a big k2. Only real-life benchmarks can tell you what is better, and it depends on the size of the collection.
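To make that concrete, here is a rough Stopwatch sketch (a micro-benchmark for illustration, not a rigorous harness; it assumes the QuickSelect extension sketched above):

// Compare full sort vs QuickSelect for the 3rd largest of a million ints.
var rnd = new Random(42);
var data = Enumerable.Range(0, 1000000).Select(_ => rnd.Next()).ToList();

var sw = System.Diagnostics.Stopwatch.StartNew();
int bySort = data.OrderByDescending(x => x).Skip(2).First();
sw.Stop();
Console.WriteLine($"full sort:   {bySort} in {sw.ElapsedMilliseconds} ms");

sw.Restart();
// QuickSelect partitions in place, so work on a copy to keep data intact.
int bySelect = new List<int>(data)
    .QuickSelect(2, (x, y) => Comparer<int>.Default.Compare(y, x));
sw.Stop();
Console.WriteLine($"quickselect: {bySelect} in {sw.ElapsedMilliseconds} ms");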
A small note about the algorithm:
As with quicksort, quickselect is generally implemented as an in-place algorithm, and beyond selecting the k'th element, it also partially sorts the data. See selection algorithm for further discussion of the connection with sorting.
So it modifies the ordering of the original list.

Check if an IEnumerable<T> has 5 or more matches

I have an IEnumerable<T> where T is a complex object. I need to check whether there are 5 or more items in the list that match a lambda expression. Currently I am using something like this:
if(myList.Count(c=> c.PropertyX == desiredX && c.Y != undesiredY) >= 5)...
However, as myList grows to contain 10K+ objects this becomes a huge bottleneck, even though more than likely a match will be found within the first 100 items (but I can't make that assumption).
How can I do this as efficiently as possible?
You can use a Where to filter, then Skip the first 4 matches and use Any which will stop iterating once it hits the 5th match. The case where there are less than 5 matches will still have to iterate the entire list though.
if(myList.Where(c=> c.PropertyX == desiredX && c.Y != undesiredY).Skip(4).Any())
How about iterating through the list using a plain old for loop?
int count = 0;
for (int i = 0; i < myList.Count; ++i)
{
if (myList[i].PropertyX == desiredX && myList[i].Y != undesiredY)
count++;
if (count == 5)
break;
}
This should pretty much be as fast as it gets on a single thread. Since you can't make any assumptions about where in the list these items may be, the time complexity of the algorithm won't be better than O(n), where n is the number of items in the list; i.e., in the worst case you may have to iterate through the entire list. And there is no faster way of iterating through a list than a plain for loop that I know of :)
You can Skip the first 4 matching elements and check whether the collection has any further elements. This way you will not count all the elements in the collection.
var result = myList.Where(c=>c.PropertyX == desiredX && c.Y != undesiredY);
if(result.Skip(4).Any())
{
//has >= 5 elements.
}

C# - Get all combinations of defined length (or less) from a list of objects

I have a List<Object> and an int maxSize. I want a List<List<Object>> containing all combinations of the objects of size maxSize (the lists are of size maxSize, not the objects), or less.
I've seen solutions to problems that look like my problem, except they aren't my problem, and evidently my brain isn't smart enough to figure out the difference. I've also already spent 3 days on my overall problem trying to recurse myself to oblivion, so at this point I'm just looking for solutions that simply work.
Requirements:
It has to return a List<List<Object>>, not anything else.
It has to take a List<Object> and int as arguments. If it needs more arguments for recursion or whatever, it's fine. The first two args are still a List<Object> and an int, not anything else.
Each List<Object> can only be of maxSize, or less. The "or less" requirement is optional, I can work without it.
Order does not matter. {1,2} equals {2,1}.
Duplicates are not allowed. If it contains {1,2}, it cannot contain {2,1}.
Here's a slight modification to my answer to this question:
Clean algorithm to generate all sets of the kind (0) to (0,1,2,3,4,5,6,7,8,9)
static IEnumerable<List<T>> Subsets<T>(List<T> objects, int maxLength) {
if (objects == null || maxLength <= 0)
yield break;
var stack = new Stack<int>(maxLength);
int i = 0;
while (stack.Count > 0 || i < objects.Count) {
if (i < objects.Count) {
if (stack.Count == maxLength)
i = stack.Pop() + 1;
stack.Push(i++);
yield return (from index in stack.Reverse()
select objects[index]).ToList();
} else {
i = stack.Pop() + 1;
if (stack.Count > 0)
i = stack.Pop() + 1;
}
}
}
Example of usage:
var numbers = new List<int>{1,2,3,4,5};
foreach (var subset in Subsets(numbers, 3)) {
Console.WriteLine(string.Join("", subset));
}
If you need a List<List<int>>, just call ToList() on the result.
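And if only combinations of exactly maxSize are wanted (the "or less" requirement being optional per the question), a filter on the subset length does it:

List<List<int>> exactlyThree = Subsets(numbers, 3)
    .Where(s => s.Count == 3)
    .ToList();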

Get subset x elements before and x elements after LINQ

I have a list and I need a subset of it containing the 5 elements before and the 5 elements after the one currently being processed in the for loop, so a list of 10 elements in total, ignoring the current item.
I am currently achieving this as follows:
var currentIndex = myList.ClassName.FindIndex(a => a.Id == plate.Id);
var fromIndex = currentIndex - 5;
if (fromIndex < 0) fromIndex = 0;
var toIndex = currentIndex + 5;
if ((myList.ClassName.ElementAtOrDefault(toIndex) == null))
toIndex = myList.ClassName.Count - 1;
var subsetList = myList.ClassName.GetRange(fromIndex, (11));
comparisonPlates.RemoveAt(currentIndex);
However, I am sure there is a much better and more efficient way of doing this using LINQ. Any guidance?
I would use Skip and Take so you have all the elements surrounding your current index (and the current index itself).
To remove the current index, either add a RemoveAt, or use two Skip/Take pairs: one to take the elements before yours, and one to take the elements after.
With a sample :
const int currentIndex = 12;
const int nbElements = 5;
List<string> results = items.Skip(currentIndex - nbElements).Take(nbElements).Concat(items.Skip(currentIndex + 1).Take(nbElements)).ToList();
I'd suggest you use LINQ to generate indices:
var subset = Enumerable.Range(currentIndex - 5, 5)
.Concat(Enumerable.Range(currentIndex + 1, 5))
.SkipWhile(index => index < 0)
.TakeWhile(index => index < items.Count)
.Select(index => items[index]);
This can be more efficient than using items.Skip, because items.Skip(n) internally calls IEnumerator.MoveNext n times, one element at a time; that is, the greater your currentIndex, the less efficient it gets.
Of course, the SkipWhile and TakeWhile in the code above are slightly inefficient for the same reason, but in total they always loop over only 10 indices.
If you hate that inefficiency, you can calculate indices and counts (parameters of Enumerable.Range) beforehand and eliminate those.
(In my opinion, my code above seems more readable.)
In addition, Runtime Complexity of the indexer([]) of List is O(1), that means items[index] takes constant time regardless of index value or the size of your List.
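A sketch of that pre-calculation: clamping the Enumerable.Range arguments up front makes the SkipWhile/TakeWhile guards unnecessary.

int before = Math.Min(5, currentIndex);                   // how many fit on the left
int after = Math.Min(5, items.Count - currentIndex - 1);  // how many fit on the right
var subset = Enumerable.Range(currentIndex - before, before)
    .Concat(Enumerable.Range(currentIndex + 1, after))
    .Select(index => items[index]);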
Here is a solution that I feel is a little more readable.
var numElements = 5;
var fromIndex = currentIndex <= numElements ? 0 : currentIndex - numElements - 1;
var toIndex = myList.Count() - currentIndex <= numElements ? myList.Count() : currentIndex + numElements;
var subsetList = myList.Skip(fromIndex).Take(toIndex - fromIndex);
EDIT:
I'd suggest using Ripple's answer because of the performance reasons mentioned. I wasn't aware of the implications of Skip/Take, but after looking it up, it does make sense. For a small list of items it won't matter, but it would with a sizable amount of data.

LINQ Query Determine Input is in List Boundaries?

I have a List of longs from a DB query. The total number of items in the List is always even, but the count can be in the hundreds.
List item [0] is the lower boundary of a "good range", item [1] is the upper boundary of that range. A numeric range between item [1] and item [2] is considered "a bad range".
Sample:
var seekset = new SortedList();
var skd = 500;
while (skd < 1000000)
{
seekset.Add(skd, 0);
skd = skd + 100;
}
If an input number is compared to the List items, if the input number is between 500-600 or 700-800 it is considered "good", but if it is between 600-700 it is considered "bad".
Using the above sample, can anyone comment on the right/fast way to determine if the number 655 is a "bad" number, ie not within any good range boundary (C#, .NET 4.5)?
If a SortedList is not the proper container for this (eg it needs to be an array), I have no problem changing, the object is static (lower case "s") once it is populated but can be destroyed/repopulated by other threads at any time.
The following works, assuming the list is already sorted and both of each pair of limits are treated as "good" values:
public static bool IsGood<T>(List<T> list, T value)
{
int index = list.BinarySearch(value);
// Either an exact hit on a boundary (good), or a negative complement
// whose insertion point is odd, i.e. the value falls between a lower
// boundary (even index) and its upper boundary (odd index).
return index >= 0 || index % 2 == 0;
}
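For example, with boundaries like the question's sample ranges:

var boundaries = new List<long> { 500, 600, 700, 800 };
Console.WriteLine(IsGood(boundaries, 655L)); // False: falls in the 600-700 gap
Console.WriteLine(IsGood(boundaries, 550L)); // True: inside the 500-600 range
Console.WriteLine(IsGood(boundaries, 600L)); // True: exact hit on a boundary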
If you only have a few hundred items then it's really not that bad. You can just use a regular List and do a linear search to find the item. If the index of the first larger item is even then it's no good, if it's odd then it's good:
var index = data.Select((n, i) => new { n, i })
.SkipWhile(item => item.n < someValue)
.First().i;
bool isValid = index % 2 == 1;
If you have enough items that a linear search isn't desirable then you can use a BinarySearch to find the next largest item.
var searchValue = data.BinarySearch(someValue);
// An exact hit on either boundary counts as "good"; otherwise the
// complement (~searchValue) is the index of the next larger element.
bool isValid = searchValue >= 0 || ~searchValue % 2 == 1;
I am thinking that LINQ may not be best suited for this problem, because IEnumerable forgets about item[0] when it is ready to process item[1].
Yes, this is freshman CS, but the fastest in this case may be just:
// untested code
Boolean found = false;
for(int i=0; i<seekset.Count; i+=2)
{
if (valueOfInterest >= seekset[i] &&
valueOfInterest <= seekset[i+1])
{
found = true;
break; // or return;
}
}
I apologize for not directly answering your question about "Best approach in Linq", but I sense that you are really asking about best approach for performance.
