I'm trying to find the indexes of items in a list, for example the number 0 in a list of numbers.
With the code below I get the correct index only when there is a single zero in the list. When there are two or more zeros, the second index comes out wrong.
Isn't IndexOf() the right method to use? How can I find all the indexes of an item, not just the first?
Thanks
var zerosInList = list.FindAll(x => x == 0);
if (zerosInList.Count > 0)
{
    foreach (var item in zerosInList) // finding indexes of zeros
    {
        indexes.Add(list.IndexOf(item));
        Console.Write("found zero in position: "); PrintList(indexes);
    }
}
The List<T>.IndexOf compares elements using the default equality comparer of T. For integers, it uses value equality. All zeros in your zerosInList collection are considered to be the same. In other words, the "second zero" or "third zero" in your foreach loop is considered no different than the "first zero", therefore the IndexOf method always returns the index of the first 0 it encounters, not the particular zero that's the subject of the foreach loop's current iteration.
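For instance, with a hypothetical list, every IndexOf call below returns 0, no matter which zero the loop is currently handling:

var list = new List<int> { 0, 4, 0 };
foreach (var item in list.FindAll(x => x == 0))
{
    Console.WriteLine(list.IndexOf(item)); // prints 0 both times - never 2
}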
You can collect the indexes of all the zeros in your original list this way:
var indexesOfZeros = new List<int>();
for (int i = 0; i < list.Count; i++)
{
    if (list[i] == 0)
    {
        indexesOfZeros.Add(i);
    }
}
List<T>.IndexOf searches the list for a given value and returns the index of its first occurrence only (or -1 if the value isn't found). But you can try this:
// yourFirstList is the list with all the numbers
for (int i = 0; i < yourFirstList.Count; i++)
{
    if (yourFirstList[i] == 0)
    {
        indexes.Add(i);
    }
}
Console.Write("found zero in position: "); PrintList(indexes);
You could also use the following Linq expression :
List<int> zeroList = intList.Select((val,idx) => new {val,idx})
.Where(t => t.val == 0)
.Select(pp => pp.idx).ToList();
The first Select creates an anonymous type containing the original value and its index, the Where clause keeps only the entries whose value is 0, and the final Select returns their indexes in the original list.
I like Lambdas :D
List<int> indexes = list.Select((item, index) =>
item == 0 ? index : -1
).Where(i => i != -1).ToList();
Console.Write("found zero in position: "); PrintList(indexes);
The List.FindAll method returns the items that match your predicate, not their indexes. I mean, if you have, let's say,
list[0] = 0;
list[1] = 4;
list[2] = 0;
and apply FindAll(x => x == 0) to the list, you get a list that contains two zero values, while I think you expect indexes to contain 0 and 2 in this case. You could use IndexOf() with a start index to accomplish your task, iterating through your items until you get -1:
int indexOfItem = 0;
while (indexOfItem != -1 && indexOfItem < list.Count) {
    indexOfItem = list.IndexOf(0, indexOfItem);
    if (indexOfItem != -1) {
        indexes.Add(indexOfItem);
        indexOfItem++;
    }
}
I have a list, and I want to select the fifth highest element from it:
List<int> list = new List<int>();
list.Add(2);
list.Add(18);
list.Add(21);
list.Add(10);
list.Add(20);
list.Add(80);
list.Add(23);
list.Add(81);
list.Add(27);
list.Add(85);
But OrderByDescending is not working for this int list...
int fifth = list.OrderByDescending(x => x).Skip(4).First();
Depending on how serious it is for the list to have fewer than 5 elements, you have two options.
If the list should never have fewer than 5 elements, I would treat that case as an exception:
int fifth;
try
{
fifth = list.OrderByDescending(x => x).ElementAt(4);
}
catch (ArgumentOutOfRangeException)
{
//Handle the exception
}
If you expect that the list may have fewer than 5 elements, you could fall back to the default value and check for that:
int fifth = list.OrderByDescending(x => x).ElementAtOrDefault(4);
if (fifth == 0)
{
//handle default
}
This is still somewhat flawed, because the fifth element could legitimately be 0. That can be solved by casting the list to a list of nullable ints before the LINQ query:
var newList = list.Select(i => (int?)i).ToList();
int? fifth = newList.OrderByDescending(x => x).ElementAtOrDefault(4);
if (fifth == null)
{
//handle default
}
Without LINQ expressions:
int result;
if(list != null && list.Count >= 5)
{
list.Sort();
result = list[list.Count - 5];
}
else // define behavior when list is null OR has less than 5 elements
This has a better performance compared to LINQ expressions, although the LINQ solutions presented in my second answer are comfortable and reliable.
In case you need extreme performance for a huge List of integers, I'd recommend a more specialized algorithm, like in Matthew Watson's answer.
Attention: The List gets modified when the Sort() method is called. If you don't want that, you must work with a copy of your list, like this:
List<int> copy = new List<int>(original);
// or
List<int> copy = original.ToList();
The easiest way to do this is to just sort the data and take N items from the front. This is the recommended way for small data sets - anything more complicated is just not worth it otherwise.
However, for large data sets it can be a lot quicker to do what's known as a Partial Sort.
There are two main ways to do this: Use a heap, or use a specialised quicksort.
The article I linked describes how to use a heap. I shall present a partial sort below:
public static IList<T> PartialSort<T>(IList<T> data, int k) where T : IComparable<T>
{
    int start = 0;
    int end = data.Count - 1;

    while (end > start)
    {
        var index = partition(data, start, end);
        var rank = index + 1;

        if (rank >= k)
        {
            end = index - 1;
        }
        else if ((index - start) > (end - index))
        {
            quickSort(data, index + 1, end);
            end = index - 1;
        }
        else
        {
            quickSort(data, start, index - 1);
            start = index + 1;
        }
    }

    return data;
}

static int partition<T>(IList<T> lst, int start, int end) where T : IComparable<T>
{
    T x = lst[start];
    int i = start;

    for (int j = start + 1; j <= end; j++)
    {
        if (lst[j].CompareTo(x) < 0) // Or "> 0" to reverse sort order.
        {
            i = i + 1;
            swap(lst, i, j);
        }
    }

    swap(lst, start, i);
    return i;
}

static void swap<T>(IList<T> lst, int p, int q)
{
    T temp = lst[p];
    lst[p] = lst[q];
    lst[q] = temp;
}

static void quickSort<T>(IList<T> lst, int start, int end) where T : IComparable<T>
{
    if (start >= end)
        return;

    int index = partition(lst, start, end);
    quickSort(lst, start, index - 1);
    quickSort(lst, index + 1, end);
}
Then, to access the 5th largest element in a list, you could do this (using the reversed comparison noted in the partition method, so that the largest items are moved to the front):
PartialSort(list, 5);
Console.WriteLine(list[4]);
For large data sets, a partial sort can be significantly faster than a full sort.
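For completeness, the heap idea mentioned above can be sketched roughly like this (a rough sketch, not the linked article's code; it assumes .NET 6+ where PriorityQueue<TElement, TPriority> is available, and keeps a min-heap of the k largest values seen so far so that its root is the k-th largest):

static int KthLargest(IEnumerable<int> data, int k)
{
    // PriorityQueue dequeues the smallest priority first, i.e. it behaves as a min-heap.
    var heap = new PriorityQueue<int, int>();

    foreach (var value in data)
    {
        heap.Enqueue(value, value);
        if (heap.Count > k)
            heap.Dequeue(); // discard the smallest of the k + 1 candidates
    }

    if (heap.Count < k)
        throw new ArgumentException("Not enough elements.");

    return heap.Peek(); // root of the heap = k-th largest overall
}

// KthLargest(list, 5) would return the 5th highest element.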
Addendum
See here for another (probably better) solution that uses a QuickSelect algorithm.
This LINQ approach retrieves the 5th biggest element OR throws an exception WHEN the list is null or contains less than 5 elements:
int fifth = list?.Count >= 5 ?
list.OrderByDescending(x => x).Take(5).Last() :
throw new Exception("list is null OR has not enough elements");
This one retrieves the 5th biggest element OR null WHEN the list is null or contains less than 5 elements:
int? fifth = list?.Count >= 5 ?
list.OrderByDescending(x => x).Take(5).Last() :
default(int?);
if(fifth == null) // define behavior
This one retrieves the 5th biggest element OR the smallest element WHEN the list contains less than 5 elements:
if(list == null || list.Count <= 0)
throw new Exception("Unable to retrieve Nth biggest element");
int fifth = list.OrderByDescending(x => x).Take(5).Last();
All these solutions are reliable, they should NEVER throw "unexpected" exceptions.
PS: I'm using .NET 4.7 in this answer.
Here is a C# implementation of the QuickSelect algorithm for selecting the nth element of an unordered IList<>.
You have to put all the code contained in that page in a static class, like:
public static class QuickHelpers
{
// Put the code here
}
Given that "library" (in truth a big fat block of code), then you can:
int resA = list.QuickSelect(2, (x, y) => Comparer<int>.Default.Compare(y, x));
int resB = list.QuickSelect(list.Count - 1 - 2);
Now... normally QuickSelect selects the nth lowest element. We reverse that in two ways:
For resA we create a reversed comparer based on the default int comparer, by swapping the parameters of its Compare method. Note that the index is 0-based, so there is a 0th, 1st, 2nd element and so on.
For resB we use the fact that the nth highest element is the (list.Count - 1 - n)th lowest, so we count from the back: the highest element would be at list.Count - 1 in an ordered list, the next one at list.Count - 1 - 1, then list.Count - 1 - 2, and so on.
Theoretically, using QuickSelect should be better than sorting the list and then picking the nth element, because sorting is on average an O(N log N) operation and picking the nth element afterwards is O(1), so the whole thing is O(N log N), while QuickSelect is on average O(N). Clearly there is a but: big-O notation hides the constant factor, so an O(k1 * N log N) with a small k1 could still beat an O(k2 * N) with a big k2. Only real-life benchmarks can tell you which is better, and it depends on the size of the collection.
A small note about the algorithm:
As with quicksort, quickselect is generally implemented as an in-place algorithm, and beyond selecting the k'th element, it also partially sorts the data. See selection algorithm for further discussion of the connection with sorting.
So it modifies the ordering of the original list.
I am attempting to loop through every combination of an array in C#, of any size, where order does not matter. For example: var states = ["NJ", "AK", "NY"];
Some Combinations might be:
states = [];
states = ["NJ"];
states = ["NJ","NY"];
states = ["NY"];
states = ["NJ", "NY", "AK"];
and so on...
It is also true in my case that states = ["NJ","NY"] and states = ["NY","NJ"] are the same thing, as order does not matter.
Does anyone have any idea on the most efficient way to do this?
The combination of the following two methods should do what you want. The idea is that if the number of items is n, then the number of subsets is 2^n. If you iterate from 0 to 2^n - 1 and look at each number in binary, you have one bit per item: if the bit is 1 you include the item, if it is 0 you don't. I'm using BigInteger here because int would only work for collections of fewer than 32 items and long only for fewer than 64.
public static IEnumerable<IEnumerable<T>> PowerSets<T>(this IList<T> set)
{
    var totalSets = BigInteger.Pow(2, set.Count);
    for (BigInteger i = 0; i < totalSets; i++)
    {
        yield return set.SubSet(i);
    }
}

public static IEnumerable<T> SubSet<T>(this IList<T> set, BigInteger n)
{
    for (int i = 0; i < set.Count && n > 0; i++)
    {
        if ((n & 1) == 1)
        {
            yield return set[i];
        }
        n = n >> 1;
    }
}
With that the following code
var states = new[] { "NJ", "AK", "NY" };
foreach (var subset in states.PowerSets())
{
Console.WriteLine("[" + string.Join(",", subset.Select(s => "'" + s + "'")) + "]");
}
Will give you this output.
[]
['NJ']
['AK']
['NJ','AK']
['NY']
['NJ','NY']
['AK','NY']
['NJ','AK','NY']
You can use backtracking, where at each index i you either (1) take the item at index i or (2) skip the item at index i.
Pseudo-code for this problem (a C# sketch follows the outline):
Main code:
Define a bool array (e.g. named picked) of the length of states
Call the backtracking method with index 0 (the first item)
Backtracking function:
Receives the states array, its length, the current index and the bool array
Halt condition - if the current index is equal to the length then you just need to iterate over the bool array and for each item which is true print the matching string from states
Actual backtracking:
Set picked[i] to true and call the backtracking function with the next index
Set picked[i] to false and call the backtracking function with the next index
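A minimal C# sketch of that outline (the Backtrack method and the picked array are illustrative names, not from the question):

static void Backtrack(string[] states, bool[] picked, int index)
{
    if (index == states.Length)
    {
        // Halt condition: print every state whose flag is set.
        Console.WriteLine("[" + string.Join(",", states.Where((s, i) => picked[i])) + "]");
        return;
    }

    picked[index] = true;                 // (1) take the item at this index
    Backtrack(states, picked, index + 1);

    picked[index] = false;                // (2) skip the item at this index
    Backtrack(states, picked, index + 1);
}

// Usage:
// Backtrack(new[] { "NJ", "AK", "NY" }, new bool[3], 0);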
int highestValue = someList.IndexOf(someList.Max());
someList contains a lot of duplicates, and IndexOf(someList.Max()) returns the index of the first occurrence of the highest value.
Is there some trickery I can use (reversing the order of the list?) to get the index of the final occurrence of the highest value in the list, rather than resorting to writing a manual method?
Try this:
int highestValue = someList.LastIndexOf(someList.Max());
All the other answers being completely correct, it must be noted that this requires 2 iterations over the list (one to find the max element, second to find the last index). For a list of integers that's a non-issue, but if the iteration was more complicated, here's an alternative:
var highestValue = someList.Select((val, ind) => new { Value = val, Index = ind })
.Aggregate((x, y) => (x.Value > y.Value) ? x : y)
.Index;
You mean like getting the index of the last occurrence? That would be:
int highestValueIndex = someList.LastIndexOf(someList.Max());
You should, however, be aware of the fact that you're making two passes over the data in both your original code and the code above. If you want to do it in a single pass (and you should only worry about this if your data sets are large), you can do this with something like:
static int LastIndexOfMax(List<int> list)
{
    // Empty list, no index.
    if (list.Count == 0) return -1;

    // Default to first element then check all others.
    int maxIdx = 0, maxVal = list[0];
    for (int idx = 1; idx < list.Count; ++idx) {
        // Higher or equal-and-to-the-right, replace.
        if (list[idx] >= maxVal) {
            maxIdx = idx;
            maxVal = list[idx];
        }
    }
    return maxIdx;
}
Use LastIndexOf
int highestValue = someList.LastIndexOf(someList.Max());
I have a List of longs from a DB query. The total number in the List is always an even number, but the quantity of items can be in the hundreds.
List item [0] is the lower boundary of a "good range", item [1] is the upper boundary of that range. A numeric range between item [1] and item [2] is considered "a bad range".
Sample:
var seekset = new SortedList();
var skd= 500;
while( skd< 1000000 )
{
seekset.Add(skd, 0);
skd = skd+ 100;
}
If an input number is compared to the List items, if the input number is between 500-600 or 700-800 it is considered "good", but if it is between 600-700 it is considered "bad".
Using the above sample, can anyone comment on the right/fast way to determine if the number 655 is a "bad" number, ie not within any good range boundary (C#, .NET 4.5)?
If a SortedList is not the proper container for this (eg it needs to be an array), I have no problem changing, the object is static (lower case "s") once it is populated but can be destroyed/repopulated by other threads at any time.
The following works, assuming the list is already sorted and both of each pair of limits are treated as "good" values:
public static bool IsGood<T>(List<T> list, T value)
{
int index = list.BinarySearch(value);
return index >= 0 || index % 2 == 0;
}
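A possible usage sketch against boundaries like the question's sample, assuming they are stored in a plain sorted List<int> rather than a SortedList:

// Boundaries from the question's sample: 500, 600, 700, 800, ...
var boundaries = new List<int>();
for (int b = 500; b < 1000000; b += 100)
    boundaries.Add(b);

Console.WriteLine(IsGood(boundaries, 655)); // False - falls in the 600..700 gap
Console.WriteLine(IsGood(boundaries, 750)); // True  - inside the 700..800 range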
If you only have a few hundred items then it's really not that bad. You can just use a regular List and do a linear search for the first boundary that is not less than your value. If the index of that boundary is even then the value is no good, if it's odd then it's good:
var index = data.Select((n, i) => new { n, i })
                .SkipWhile(item => item.n < someValue)
                .First().i;
bool isValid = index % 2 == 1;
If you have enough items that a linear search isn't desirable then you can use a BinarySearch to find the next largest item.
var searchValue = data.BinarySearch(someValue);
if (searchValue < 0)
searchValue = ~searchValue;
bool isValid = searchValue % 2 == 1;
I am thinking that LINQ may not be best suited for this problem because IEnumerable forgets about item[0] when it is ready to process item[1].
Yes, this is freshman CS, but the fastest in this case may be just
// untested code
// (assumes seekset is an ordered array or IList of boundary values)
Boolean found = false;
for (int i = 0; i < seekset.Count; i += 2)
{
    if (valueOfInterest >= seekset[i] &&
        valueOfInterest <= seekset[i + 1])
    {
        found = true;
        break; // or return;
    }
}
I apologize for not directly answering your question about "Best approach in Linq", but I sense that you are really asking about best approach for performance.
I have a list which contains some items of type string.
List<string> lstOriginal;
I have another list which contains indices that should be removed from the first list.
List<int> lstIndices;
I tried to do the job with the RemoveAt() method,
foreach(int indice in lstIndices)
{
lstOriginal.RemoveAt(indice);
}
but it crashes with an "Index was out of range" exception.
You need to sort the indexes you want to remove from largest to smallest in order to avoid removing something at the wrong index.
foreach(int indice in lstIndices.OrderByDescending(v => v))
{
lstOriginal.RemoveAt(indice);
}
Here is why: let's say you have a list of five items, and you'd like to remove the items at indexes 2 and 4. If you remove the item at index 2 first, the item that was at index 4 moves to index 3, and index 4 no longer exists in the list at all (causing your exception). If you go backwards, every index is still valid at the moment you remove the corresponding item.
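To make that concrete, here is a minimal sketch with a hypothetical five-item list, removing indexes 2 and 4:

var items = new List<string> { "a", "b", "c", "d", "e" };
var indices = new List<int> { 2, 4 };

// Ascending order would fail: after RemoveAt(2) only four items remain,
// so RemoveAt(4) throws ArgumentOutOfRangeException.
// Descending order removes index 4 first, then index 2 - no shifting problem.
foreach (var i in indices.OrderByDescending(v => v))
    items.RemoveAt(i);

Console.WriteLine(string.Join(",", items)); // a,b,d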
How are you populating the list of indices? There's a much more efficient RemoveAll method that you might be able to use. For example, instead of this:
var indices = new List<int>();
int index = 0;
foreach (var item in data)
{
    if (SomeFunction(item))
        indices.Add(index);
    index++;
}
// then some logic to remove the items
you could do this:
data.RemoveAll(item => SomeFunction(item));
This minimizes the copying of items to new positions in the array; each item is copied only once.
You could also use a method group conversion in the above example, instead of a lambda:
data.RemoveAll(SomeFunction);
The reason this is happening is because when you remove an item from the list, the index of each item after it effectively decreases by one, so if you remove them in increasing index order and some items near the end of the original list were to be removed, those indices are now invalid because the list becomes shorter as the earlier items are removed.
The easiest solution is to sort your index list in decreasing order (highest index first) and then iterate across that. Alternatively, if the index list is sorted in increasing order and contains no duplicates, you can compensate for the shift by subtracting the number of items already removed:
for (int i = 0; i < indices.Count; i++)
{
items.RemoveAt(indices[i] - i);
}
My in-place deletion of the given indices as a handy extension method. It copies each item at most once, so it is much more performant when a large number of indices is to be removed.
It also throws an ArgumentOutOfRangeException when an index to remove is out of bounds.
public static class ListExtensions
{
    public static void RemoveAllIndices<T>(this List<T> list, IEnumerable<int> indices)
    {
        // do not remove Distinct() call here, it's important
        var indicesOrdered = indices.Distinct().ToArray();
        if (indicesOrdered.Length == 0)
            return;

        Array.Sort(indicesOrdered);

        if (indicesOrdered[0] < 0 || indicesOrdered[indicesOrdered.Length - 1] >= list.Count)
            throw new ArgumentOutOfRangeException();

        int indexToRemove = 0;
        int newIdx = 0;

        for (int originalIdx = 0; originalIdx < list.Count; originalIdx++)
        {
            if (indexToRemove < indicesOrdered.Length && indicesOrdered[indexToRemove] == originalIdx)
            {
                indexToRemove++;
            }
            else
            {
                list[newIdx++] = list[originalIdx];
            }
        }

        list.RemoveRange(newIdx, list.Count - newIdx);
    }
}
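A possible usage sketch with hypothetical data:

var lstOriginal = new List<string> { "a", "b", "c", "d", "e" };
var lstIndices = new List<int> { 1, 3 };

lstOriginal.RemoveAllIndices(lstIndices);

Console.WriteLine(string.Join(",", lstOriginal)); // a,c,e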
var array = lstOriginal.ConvertAll(item => new int?(item)).ToArray();
lstIndices.ForEach(index => array[index] = null);
lstOriginal = array.Where(item => item.HasValue).Select(item => item.Value).ToList();
lstIndices.OrderByDescending(p => p).ToList().ForEach(p => lstOriginal.RemoveAt((int)p));
As a side note, in foreach statements it is better not to modify the IEnumerable the foreach is iterating over; here, though, the out-of-range error comes from the indices shifting as items are removed, as explained above.