Lambda ForEach with Index - C#

Here is a list of column names:
var colNames = new List<string> { "colE", "colL", "colO", "colN" };
Based on the position of each column name in the list, I want to set that column's visible index to the position of the name, but without returning a list. In other words, I want the following lambda expression without the "ToList()" at the end:
colNames.Select((x, index) => { grid_ctrl.Columns[x].VisibleIndex = index; return x; }).ToList();
Can this be coded in a one-line lambda expression?

Use a loop to perform side effects. Use queries to compute new data from existing data:
var updates =
    colNames.Select((x, index) => new { col = grid_ctrl.Columns[x], index })
            .ToList();
foreach (var u in updates)
    u.col.VisibleIndex = u.index;
Hiding side effects in queries can make for nasty surprises, but we can still use a query to do the bulk of the work.
You could also use List<T>.ForEach to perform those side effects. That approach is not very extensible, however; it is not as general as a query.

Yes, here you are:
colNames.ForEach((x) => grid_ctrl.Columns[x].VisibleIndex = colNames.IndexOf(x));
Note that you need unique strings in your list, otherwise .IndexOf will behave badly.
Unfortunately List<T>.ForEach, like its relative foreach, doesn't provide an enumeration index.
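If the column names might not be unique, a plain for loop sidesteps IndexOf entirely; a minimal sketch using the question's colNames and grid_ctrl:
for (int index = 0; index < colNames.Count; index++)
    grid_ctrl.Columns[colNames[index]].VisibleIndex = index;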


Loop to set OrderBy/ThenBy for multiple columns

I'm using the DataTables.Mvc library for use with jQuery DataTables.
One of the methods is GetSortedColumns() which returns an array containing configurations for each column to be sorted.
Of interest in this object are the Name and SortDirection properties. Name is also the database table field name. SortDirection is either asc or desc.
At first, ThenBy and ThenByDescending were undefined symbols, so I created ordered as an IOrderedQueryable. This resolves the symbols, but I don't see any effect from them. Neither OrderBy, OrderByDescending, ThenBy nor ThenByDescending has any effect on the order of records in filteredRecords.
In Controller:
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult GetUserSelections([ModelBinder(typeof(DataTablesBinder))] IDataTablesRequest requestModel)
{
    // Column Sort
    var filteredRecords = db.AspNetSelectorInputs.Select(si => si);
    var sortedColumns = requestModel.Columns.GetSortedColumns();
    var count = 0;
    foreach (var column in sortedColumns)
    {
        var ordered = filteredRecords as IOrderedQueryable<AspNetSelectorInput>;
        filteredRecords =
            column.SortDirection == DataTables.Mvc.Column.OrderDirection.Ascendant
                ? count == 0
                    ? ordered.OrderBy(c => column.Name)
                    : ordered.ThenBy(c => column.Name)
                : count == 0
                    ? ordered.OrderByDescending(c => column.Name)
                    : ordered.ThenByDescending(c => column.Name);
        count++;
    }
    filteredRecords = filteredRecords.Select(si => si).Skip(requestModel.Start).Take(requestModel.Length);
....
Can anyone see why this doesn't affect ordering of filteredRecords?
Is there a better way?
It is sorting, and on exactly what you've asked it to sort by. But the lambda expressions aren't doing what you think. For example, you're doing .OrderBy(c => column.Name), which sorts using the literal value of the column's name. That value is the same for every item in the collection (notice how the thing being sorted on is not affected by c), so it appears not to sort your collection. You might as well be doing .OrderBy(c => "Hello").
You would need to do something like .OrderBy(c => c.YourChoiceOfPropertyName). Except you can't do that because (presumably) the name of the property is a string value in column.Name. So you'll need to use reflection within the lambda to get the value of that property using c as the instance. This will need fixing on all the lambdas. For example, inside the loop:
var propertyInfo = typeof(AspNetSelectorInput)
    .GetProperty(column.Name);
And replacement lambda expressions:
c => propertyInfo.GetValue(c)
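Pulling the pieces together, a minimal sketch of the corrected loop might look like the following. It assumes the records are pulled into memory with AsEnumerable() first, since a reflection call inside the lambda cannot be translated to SQL by the query provider:
// Sketch only: evaluate in memory, then sort by the named property via reflection.
var records = filteredRecords.AsEnumerable();
IOrderedEnumerable<AspNetSelectorInput> ordered = null;
var count = 0;
foreach (var column in sortedColumns)
{
    var propertyInfo = typeof(AspNetSelectorInput).GetProperty(column.Name);
    Func<AspNetSelectorInput, object> key = c => propertyInfo.GetValue(c);
    bool ascending = column.SortDirection == DataTables.Mvc.Column.OrderDirection.Ascendant;

    ordered = count == 0
        ? (ascending ? records.OrderBy(key) : records.OrderByDescending(key))
        : (ascending ? ordered.ThenBy(key) : ordered.ThenByDescending(key));
    count++;
}
// Use ordered (or records, if there were no sorted columns) for Skip/Take afterwards.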
P.S. the two instances of .Select(si => si) seem to be redundant, unless I am missing something.

How to set properties using Linq statement

Instead of this horrible loop, which does achieve the desired result:
foreach (var mealsViewModel in mealsListCollection)
{
    foreach (var VARIABLE in mealsViewModel.Items)
    {
        foreach (var d in VARIABLE.ArticlesAvailable)
        {
            d.ArticleQty = 0;
        }
    }
}
I'm trying to achieve the same result with this LINQ statement:
mealsListCollection.ForEach(u =>
u.Items.Select(o => o.ArticlesAvailable.Select(c =>
{
c.ArticleQty = 0;
return c;
})));
But the LINQ statement does not reset ArticleQty to zero.
What am I doing wrong, and why?
Change your LINQ to ForEach, because Select does not iterate through the collection in the way you want.
MSDN definitions:
Select: Projects each element of a sequence into a new form.
ForEach: Performs the specified action on each element of the List<T>.
mealsListCollection.ForEach(u =>
u.Items.ForEach(o =>
o.ArticlesAvailable.ForEach(c =>
{
c.ArticleQty = 0;
})));
Use SelectMany to work through trees of nested lists. Use the ForEach function last to do the work:
mealsListCollection
.SelectMany(m => m.Items)
.SelectMany(i => i.ArticlesAvailable)
.ToList()
.ForEach(a => { a.ArticleQty = 0; });
What you are doing wrong is this: Select returns the same items, but it has no effect until the resulting sequence is iterated. Sitting inside the ForEach call, the Selects are never enumerated, so they are outside of the execution path. (Review the comments for more information.)
.Select() by itself does nothing but describe what the returned sequence will look like.
.Select().ToList() iterates over the collection, applying the projection.
If you were to assign the result of the .Select() call to a variable but never access the data inside it, the values would essentially still be what they started as. As soon as you iterate over it, or pull out a specific element, the projection is applied.
Changing the Selects to ForEaches, per vasily's comments, will give you the desired results.
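To illustrate the deferred-execution point, here is a small standalone sketch (the names are illustrative, not from the question):
var numbers = new List<int> { 1, 2, 3 };
var sideEffects = new List<int>();

// Select only describes the projection; nothing runs yet.
var query = numbers.Select(n => { sideEffects.Add(n); return n; });
// sideEffects.Count is still 0 here.

query.ToList();
// Now the projection has run: sideEffects.Count is 3.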
Can I perhaps suggest that you look to set the value to 0 further up (or down) your stack? Without knowing your use case, maybe there is a better place to default it back to 0 than where you have chosen (AutoMapper, an initializer, etc.).

Performance Improvement Tips for ForEach loop in C#?

I need to optimize the foreach loop below. The foreach loop takes too much time to collect the unique items.
Can FilterItems instead be converted into a list collection? If so, how do I do it? Then I could take the unique items from it easily.
The problem arises when I have 500,000 items in FilterItems.
Please suggest some ways to optimize the code below:
int i = 0;
List<object> order = new List<object>();
List<object> unique = new List<object>();
// FilterItems IS A COLLECTION OF RECORDS. CAN THIS BE CONVERTED TO A LIST COLLECTION DIRECTLY, SO THAT I CAN TAKE THE UNIQUE ITEMS FROM IT.
foreach (Record rec in FilterItems)
{
    string text = rec.GetValue("Column Name");
    int position = order.BinarySearch(text);
    if (position < 0)
    {
        order.Insert(-position - 1, text);
        unique.Add(text);
    }
    i++;
}
It's unclear what you mean by "converting FilterItems into a list" when we don't know anything about it, but you could definitely consider sorting after you've got all the items, rather than as you go:
var strings = FilterItems.Select(record => record.GetValue("Column Name"))
.Distinct()
.OrderBy(x => x)
.ToList();
The use of Distinct() here will avoid sorting lots of equal items - it looks like you only want distinct items anyway.
If you want unique to be in the original order but order to be the same items, just sorted, you could use:
var unique = FilterItems.Select(record => record.GetValue("Column Name"))
.Distinct()
.ToList();
var order = unique.OrderBy(x => x).ToList();
Now Distinct() isn't guaranteed to preserve order - but it does so in the current implementation, and that's the most natural implementation, too.
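For comparison, a hand-rolled single pass with a HashSet (a sketch reusing the question's FilterItems and Record) produces the same two lists without the per-item BinarySearch/Insert cost:
var seen = new HashSet<string>();
var unique = new List<string>();                 // original encounter order
foreach (Record rec in FilterItems)
{
    string text = rec.GetValue("Column Name");
    if (seen.Add(text))                          // Add returns false for duplicates
        unique.Add(text);
}
var order = unique.OrderBy(x => x).ToList();     // sorted copy, built once at the end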

How do I use Linq with a HashSet of Integers to pull multiple items from a list of Objects?

I have a HashSet of ID numbers, stored as integers:
HashSet<int> IDList; // Assume that this is created with a new statement in the constructor.
I have a SortedList of objects, indexed by the integers found in the HashSet:
SortedList<int,myClass> masterListOfMyClass;
I want to use the HashSet to create a List as a subset of masterListOfMyClass.
After wasting all day trying to figure out the Linq query, I eventually gave up and wrote the following, which works:
public List<myClass> SubSet {
    get {
        List<myClass> xList = new List<myClass>();
        foreach (int x in IDList) {
            if (masterListOfMyClass.ContainsKey(x)) {
                xList.Add(masterListOfMyClass[x]);
            }
        }
        return xList;
    }
    private set { }
}
So, I have two questions here:
What is the appropriate LINQ query? I'm finding LINQ extremely frustrating to try to figure out. Just when I think I've got it, it turns around and "goes on strike".
Is a LINQ query any better -- or worse -- than what I have written here?
var xList = IDList
.Where(masterListOfMyClass.ContainsKey)
.Select(x => masterListOfMyClass[x])
.ToList();
If your lists both have equally large numbers of items, you may wish to consider inverting the query (i.e. iterate through masterListOfMyClass and query IDList) since a HashSet is faster for random queries.
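A sketch of that inverted form, assuming you still want a List<myClass> of the matching values:
var xList = masterListOfMyClass
    .Where(kvp => IDList.Contains(kvp.Key))   // O(1) lookups against the HashSet
    .Select(kvp => kvp.Value)
    .ToList();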
Edit:
It's less neat, but you could save a lookup into masterListOfMyClass with the following query, which would be a bit faster:
var xList = IDList
.Select(x => { myClass y; masterListOfMyClass.TryGetValue(x, out y); return y; })
.Where(x => x != null)
.ToList();
foreach (int x in IDList.Where(x => masterListOfMyClass.ContainsKey(x)))
{
xList.Add(masterListOfMyClass[x]);
}
This is the appropriate LINQ query for your loop.
In my view, though, the LINQ query will not be any more effective here.
Here is the LINQ expression:
List<myClass> xList = masterListOfMyClass
.Where(x => IDList.Contains(x.Key))
.Select(x => x.Value).ToList();
There is no big difference in performance in such a small example. LINQ is slower in general; it actually uses iteration under the hood too. What you get with LINQ is, in my opinion, clearer code, and the execution is deferred until it is needed. Not in my example, though, since I call .ToList().
Another option would be (which is intentionally the same as Sankarann's first answer)
return (
from x in IDList
where masterListOfMyClass.ContainsKey(x)
select masterListOfMyClass[x]
).ToList();
However, are you sure you want a List to be returned? Usually, when working with IEnumerable<>, you should chain your calls using IEnumerable<> until the point where you actually need the data. There you can decide to, e.g., loop once (use the iterator) or actually pull the data into some sort of cache using the ToList(), ToArray(), etc. methods.
Also, exposing a List<> publicly implies that modifying that list has an impact on the calling class. I would leave it to the user of the property to decide whether to make a local copy or continue using the IEnumerable<>.
Second, as your private setter is empty, setting SubSet has no effect on the functionality. This again is confusing, and I would avoid it.
An alternate (an maybe less confusing) declaration of your property might look like this
public IEnumerable<myClass> SubSet {
    get {
        return from x in IDList
               where masterListOfMyClass.ContainsKey(x)
               select masterListOfMyClass[x];
    }
}

What's the most (performance) efficient and readable way to 'split' a generic list based on a condition?

(highly simplified example)
I have a generic list of strings:
var strings = new List<string> { "abc", "owla", "paula", "lala", "hop" };
I'm looking for the most efficient way to split this list into a list with elements that meet a condition and a list of elements that don't meet that same condition.
Func<string, bool> condition = s => s.IndexOf("o") > -1;
Predicate<string> kickOut = s => s.IndexOf("o") > -1;
var stringsThatMeetCondition = strings.Where(condition);
strings.RemoveAll(kickOut);
var stringsThatDontMeetCondition = strings;
Is there a way to do this with looping only once through the original list?
Use some LINQ:
var matches = list.Where(s => s.IndexOf("o") > -1).ToList();
var notMatches = list.Except(matches).ToList();
list.Clear();
list.AddRange(matches);
Update: as has been mentioned in the comments, be careful when mutating the list, because LINQ methods are evaluated on demand; they will not iterate the list until you start enumerating the IEnumerable. In my case, however, I call ToList(), which forces it to run through the entire list of items.
This would do it:
static IEnumerable<T> FilterAndRemove<T>(this List<T> list, Func<T, bool> pred)
{
    // Removes the matching items from list and returns them;
    // the non-matching items remain in list.
    List<T> filtered = new List<T>();
    int i = 0;
    while (i < list.Count)
    {
        if (pred(list[i]))
        {
            filtered.Add(list[i]);
            list.RemoveAt(i);
        }
        else
        {
            ++i;
        }
    }
    return filtered;
}
But I am sure you have already thought of something similar. Can you please update your question with the kind of efficiency that you seek?
Note that two filtering runs with pred and !pred over the original list would still be O(n) and not at all inefficient. Especially considering that you'd get the full benefit of lazy evaluation for both result sets. See also Rob's answer.
This algorithm is O(n^2).
Instead of removing each element, you can also collect the items you want to keep in a new list and copy them over to the input list before returning; that also gets you O(n).
One more option for O(n) would be switching to a linked list.
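A sketch of that linked-list variant, reusing the question's strings and condition (removing a node by reference is O(1), so the whole split stays O(n)):
var remaining = new LinkedList<string>(strings);
var matches = new List<string>();
var node = remaining.First;
while (node != null)
{
    var next = node.Next;                 // remember the successor before removing
    if (condition(node.Value))
    {
        matches.Add(node.Value);
        remaining.Remove(node);           // O(1) given the node reference
    }
    node = next;
}
// matches holds the elements that meet the condition; remaining holds the rest.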
Why not just use
var stringsThatMeetCondition = strings.Where(condition);
var stringsThatDontMeetCondition = strings.Where(x => !condition(x));
Of course, you end up applying the condition to each element in the list twice. To avoid this you might want to write a generic splitting function, which wouldn't be as neat.
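Such a splitting function might look like the following sketch (the name Split and the use of C# 7 value tuples are my choices, not from the question). It walks the list once and evaluates the condition exactly once per element:
static (List<T> Matches, List<T> Rest) Split<T>(IEnumerable<T> source, Func<T, bool> condition)
{
    var matches = new List<T>();
    var rest = new List<T>();
    foreach (var item in source)
        (condition(item) ? matches : rest).Add(item);
    return (matches, rest);
}

// usage:
var (stringsThatMeetCondition, stringsThatDontMeetCondition) = Split(strings, condition);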
Func<string, bool> condition = ...;
var groupedStrings = strings.GroupBy(condition);
var stringsMeetingCondition = groupedStrings.FirstOrDefault(g => g.Key);
var stringsNotMeetingCondition = groupedStrings.FirstOrDefault(g => !g.Key);
