I would like to know whether it is possible to modify the list being iterated over with foreach, so that there is no need to maintain an index.
var scanResults = await someFunction();
for (int i = 0; i < scanResults.Data.Count(); i++)
{
    if ((scanResults.Data.ToList()[i].Filters.Count() == 0) != (scanResults.Data.ToList()[i].SubscribedFilters.Count() == 0))
    {
        scanResults.Data.ToList()[i] = await AddFilters(scanResults.Data.ToList()[i]);
    }
}
return scanResults;
Note, as John and I mentioned in the comments, that your code calls ToList() (presumably System.Linq's) in the nested check and statements, which is likely a logical mistake: each call creates a new list, so the check and the assignment each operate on a throwaway copy. Assuming you instead referenced one list throughout your nested statements, you would run into an InvalidOperationException with the message "Collection was modified".
Scenario 1: Stick with for(;;) loop
The benefit of this is that you don't need additional memory for a temporary or alternative list.
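A minimal sketch of the corrected loop, assuming scanResults.Data can be reassigned and reusing the (guessed) property names from the question:

var data = scanResults.Data.ToList(); // materialize the list once instead of on every access
for (int i = 0; i < data.Count; i++)
{
    if ((data[i].Filters.Count() == 0) != (data[i].SubscribedFilters.Count() == 0))
    {
        data[i] = await AddFilters(data[i]);
    }
}
scanResults.Data = data; // put the modified list back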
Scenario 2: foreach with a temporary duplicate list to modify
If you really insist on using a foreach loop, then one simple option is to create an identical list with the same data and iterate through that. Depending on what you are doing within the loop, this might not work. The downside of this approach is that you use additional memory to store the duplicate list.
Much of the code is not given in the problem, so we can't say which would make more sense in your situation. However, in most cases you would stick with the for(;;) loop.
I can't see the rest of your code, so I can only guess at the data types you are using, but it would be something like this:
// Iterate over a temporary copy so the collection being enumerated is never modified;
// the element type is guessed, since the rest of the code isn't shown.
foreach (var scanResult in scanResults.Data.ToList())
{
    if ((scanResult.Filters.Count() == 0) != (scanResult.SubscribedFilters.Count() == 0))
    {
        var updated = await AddFilters(scanResult);
        // Write 'updated' back into the original scanResults.Data here;
        // how to do so depends on its actual collection type.
    }
}
I am using a ConcurrentBag as a thread-safe collection, to avoid conflicts when I update it from different threads.
But I have noticed that sometimes the items come out inverted; my first item goes to the last position of my collection.
I just want to know whether this may be happening because the collection is changed concurrently. If not, what else could be messing up my collection?
Edit: I'm adding some sample code.
When I need to add an item I do this:
var positionToInsert = (int)incremental.MDEntryPositionNo - 1;
concurrentList.ToList().Insert(positionToInsert, myListInfoToInsert);
In some cases I need to update a position, so I do this:
var foundPosition = concurrentList.ToList()
.FirstOrDefault(orderBook => orderBook.BookPosition == incremental.MDEntryPositionNo);
var index = concurrentList.ToList().IndexOf(foundPosition);
if (index != -1)
{
    concurrentList.ToList()[index] = infoToUpdate;
}
Thanks!
Edit: Just use sorting; don't use insertion, as it's a slow operation.
var orderedBooks = concurrentList.OrderBy(x=>x.BookPosition).ToList();
ConcurrentBag is implemented as a linked list of per-thread lists; its ToList code is shown below.
For each producing thread, a ThreadLocalList of its own is created (or a free one is reused).
The head of the linked list is always the same, so I don't understand how the order could change in this situation. However, you can't guarantee that the last item added won't end up in the first bucket.
Please add your sample code.
private List<T> ToList()
{
    List<T> objList = new List<T>();
    for (ConcurrentBag<T>.ThreadLocalList threadLocalList = this.m_headList; threadLocalList != null; threadLocalList = threadLocalList.m_nextList)
    {
        for (ConcurrentBag<T>.Node node = threadLocalList.m_head; node != null; node = node.m_next)
            objList.Add(node.m_value);
    }
    return objList;
}
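Given that, a minimal sketch of the suggested pattern: treat the bag as unordered storage and recover the order by sorting at read time. OrderBookEntry is a hypothetical stand-in for the question's item type.

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class OrderBookEntry
{
    public int BookPosition;
}

class Demo
{
    static void Main()
    {
        var bag = new ConcurrentBag<OrderBookEntry>();

        // Items added from parallel threads land in per-thread buckets,
        // so the bag's enumeration order is unrelated to insertion order.
        Parallel.For(0, 100, i => bag.Add(new OrderBookEntry { BookPosition = i }));

        // The order is recovered at read time instead of at insert time.
        var orderedBooks = bag.OrderBy(x => x.BookPosition).ToList();
        Console.WriteLine(orderedBooks.First().BookPosition); // 0
    }
}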
I have about 100 items (allRights) in the database and about 10 ids to be searched (inputRightsIds). Which one is better: first to get all rights and then search the items (Variant 1), or to make 10 separate checking requests to the database (Variant 2)?
Here is some example code:
DbContext db = new DbContext();
int[] inputRightsIds = new int[10]{...};
Variant 1
var allRights = db.Rights.ToList();
foreach (var right in allRights)
{
    for (int i = 0; i < inputRightsIds.Length; i++)
    {
        if (inputRightsIds[i] == right.Id)
        {
            // Do something
        }
    }
}
Variant 2
for (int i = 0; i < inputRightsIds.Length; i++)
{
    if (db.Rights.Any(r => r.Id == inputRightsIds[i]))
    {
        // Do something
    }
}
Thanks in advance!
As others have already stated, you should do the following.
var matchingIds = from r in db.Rights
                  where inputRightsIds.Contains(r.Id)
                  select r.Id;
foreach (var id in matchingIds)
{
    // Do something
}
But this is different from both of your approaches. Your first approach makes one SQL call to the DB that returns more results than you are interested in. The second makes multiple SQL calls, each returning part of the information you want. The query above makes one SQL call to the DB and returns only the data you are interested in (LINQ providers such as Entity Framework translate Contains into a SQL IN clause). This is the best approach, as it reduces the two bottlenecks of making multiple calls to the DB and having too much data returned.
You can use the following:
db.Rights.Where(right => inputRightsIds.Contains(right.Id));
They should be very similar speeds since both must enumerate the arrays the same number of times. There might be subtle differences in speed between the two depending on the input data but in general I would go with Variant 2. I think you should almost always prefer LINQ over manual enumeration when possible. Also consider using the following LINQ statement to simplify the whole search to a single line.
var matches = db.Rights.Where(r => inputRightsIds.Contains(r.Id));
...//Do stuff with matches
Don't forget to pull all your items into memory if you need to process the list further:
var itemsFromDatabase = db.Rights.Where(r => inputRightsIds.Contains(r.Id)).ToList();
Or you could enumerate the collection and do something with each item:
db.Rights.Where(r => inputRightsIds.Contains(r.Id)).ToList().ForEach(item => {
//your code here
});
Current Code:
For each element in MapEntryTable, check the IsDisplayedColumn and IsReturnColumn properties; if they are true, add the element to the corresponding list. The running time is O(n). There are many elements with both properties false, so those will not get added to any of the lists in the loop.
foreach (var mapEntry in MapEntryTable)
{
    if (mapEntry.IsDisplayedColumn)
        Type1.DisplayColumnId.Add(mapEntry.OutputColumnId);
    if (mapEntry.IsReturnColumn)
        Type1.ReturnColumnId.Add(mapEntry.OutputColumnId);
}
The following is the LINQ version of the same:
MapEntryTable.Where(x => x.IsDisplayedColumn == true).ToList().ForEach(mapEntry => Type1.DisplayColumnId.Add(mapEntry.OutputColumnId));
MapEntryTable.Where(x => x.IsReturnColumn == true).ToList().ForEach(mapEntry => Type1.ReturnColumnId.Add(mapEntry.OutputColumnId));
I am converting all such foreach code to LINQ as I am learning it, but my questions are:
Do I get any advantage from the LINQ conversion in this case, or is it a disadvantage?
Is there a better way to do the same using LINQ?
UPDATE:
Consider the case where, out of 1000 elements in the list, 80% have both properties false; does Where then give me a benefit in quickly finding the elements that match a given condition?
Type1 is a custom type with a set of List<int> members, DisplayColumnId and ReturnColumnId.
ForEach isn't a LINQ method. It's a method of List<T>. And not only is it not a part of LINQ, it's very much against the values and patterns of LINQ. Eric Lippert explains this in a blog post that was written when he was a principal developer on the C# compiler team.
Your "LINQ" approach also:
Completely unnecessarily copies all of the items to be added into a list, which is wasteful in both time and memory, and also conflicts with LINQ's goal of deferred execution when executing queries.
Isn't actually a query, with the exception of the Where operator. You're acting on the items in the query rather than performing a query. LINQ is a querying tool, not a tool for manipulating data sets.
You're iterating the source sequence twice. This may or may not be a problem, depending on what the source sequence actually is and what the costs of iterating it are.
A solution that uses LINQ as much as it is designed for would be to use it like so:
foreach (var mapEntry in MapEntryTable.Where(entry => entry.IsDisplayedColumn))
    list1.DisplayColumnId.Add(mapEntry.OutputColumnId);
foreach (var mapEntry in MapEntryTable.Where(entry => entry.IsReturnColumn))
    list2.ReturnColumnId.Add(mapEntry.OutputColumnId);
I would say stick with the original way, the foreach loop, since you iterate through the list only once.
Also, your LINQ should look more like this:
list1.DisplayColumnId.AddRange(MapEntryTable.Where(x => x.IsDisplayedColumn).Select(mapEntry => mapEntry.OutputColumnId));
list2.ReturnColumnId.AddRange(MapEntryTable.Where(x => x.IsReturnColumn).Select(mapEntry => mapEntry.OutputColumnId));
The performance of foreach vs. the LINQ-style ForEach is almost exactly the same, within nanoseconds of each other, assuming the loop body contains the same logic in both versions.
However, a for loop outperforms both by a LARGE margin. for (int i = 0; i < count; ++i) is much faster than either, because a for loop doesn't rely on an IEnumerable implementation (overhead). It compiles down to simple register index/jump instructions: the loop maintains an incrementing counter, and it's up to you to retrieve the item by its index inside the loop.
Using List's ForEach method does have a big disadvantage, though: you cannot break out of the loop. If you need to, you have to maintain a boolean such as breakLoop, set it to true, and have each invocation exit early when breakLoop is true, which performs badly. Secondly, you cannot use continue; instead you use return. A sketch of this workaround follows.
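A minimal, self-contained sketch of that workaround; the stop condition and the list contents are made up for illustration:

using System;
using System.Collections.Generic;

class ForEachBreakDemo
{
    static void Main()
    {
        var things = new List<int> { 1, 2, 3, 4, 5 };
        bool breakLoop = false; // stand-in for a real 'break'

        things.ForEach(thing =>
        {
            if (breakLoop) return;      // once set, skip all remaining items
            if (thing == 3)             // hypothetical stop condition
            {
                breakLoop = true;       // emulate 'break'
                return;                 // 'return' here acts like 'continue'
            }
            Console.WriteLine(thing);   // prints 1 and 2 only
        });
    }
}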
I never use Linq's foreach loop.
If you are dealing with LINQ, e.g.
List<Thing> things = .....;
var oldThings = things.Where(p => p.DateTime.Year < DateTime.Now.Year);
Internally that will foreach over the list and give you back only the items with a year less than the current year. Cool.
But if I am doing this:
List<Thing> things = new List<Thing>();
foreach(XElement node in Results) {
things.Add(new Thing(node));
}
I don't need to use a LINQ ForEach loop. Even if I did...
foreach (var node in thingNodes.Where(p => p.NodeType == "Thing")) {
    if (node.Ignore) {
        continue;
    }
    things.Add(new Thing(node));
}
even though I could write that more cleanly, like
foreach (var node in thingNodes.Where(p => p.NodeType == "Thing" && !p.Ignore)) {
    things.Add(new Thing(node));
}
There is no real reason I can think of to do this:
things.ForEach(thing => {
    //do something
    //can't break
    //can't continue
    return; //<- continue
});
And if I want the fastest loop possible,
for (int i = 0; i < things.Count; ++i) {
    var thing = things[i];
    //do something
}
Will be faster.
Your LINQ isn't quite right as you're converting the results of Where to a List and then pseudo-iterating over those results with ForEach to add to another list. Use ToList or AddRange for converting or adding sequences to lists.
Example, where overwriting list1 (if it were actually a List<T>):
list1 = MapEntryTable.Where(x => x.IsDisplayedColumn == true)
.Select(mapEntry => mapEntry.OutputColumnId).ToList();
or to append:
list1.AddRange(MapEntryTable.Where(x => x.IsDisplayedColumn == true)
.Select(mapEntry => mapEntry.OutputColumnId));
In C#, to do what you want functionally in one call, you have to write your own partition method. If you are open to using F#, you can use List.Partition<'T>
https://msdn.microsoft.com/en-us/library/ee353782.aspx
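In C# you could sketch such a partition helper yourself; the name, the extension-method shape, and the tuple return (which needs C# 7+) are my own choices, not a standard API:

using System;
using System.Collections.Generic;

static class EnumerableExtensions
{
    // Splits 'source' into the items matching the predicate and the rest,
    // in a single pass over the sequence.
    public static (List<T> Matches, List<T> Rest) Partition<T>(
        this IEnumerable<T> source, Func<T, bool> predicate)
    {
        var matches = new List<T>();
        var rest = new List<T>();
        foreach (var item in source)
        {
            if (predicate(item)) matches.Add(item);
            else rest.Add(item);
        }
        return (matches, rest);
    }
}

Note that a partition puts each item in exactly one bucket, which differs slightly from the question's two independent flags.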
Okay, so I wasn't completely sure what headline would fit my problem, but here goes the description:
I have objects that can reference other objects, to create dropdown lists where the content/values depend on the values chosen in "parent" dropdowns.
My dropdown objects contain an id and a parentId (and other stuff, not relevant here).
I want to prevent the users from making infinite loops, like this:
List 1 (dependent on list 3)
List 2 (dependent on list 1)
List 3 (dependent on list 2)
I've tried writing a recursive method to prevent it, but I cannot figure out the logic.
Could anyone tell me how you would ensure that an object isn't referencing itself "down the line"? Or provide an example, perhaps?
Any help is much appreciated.
The simplest way I can think of is to create a flattened list. Recursively iterate the objects and store each reference in a list; as you find new objects, check each one against the list.
You'll either encounter an object referencing itself or run out of objects to search.
Whether this method is suitable will depend on your requirements: speed, memory, and the number of items in the list.
Since all objects contain an id, the list could store/check that instead, if you need value equality rather than reference equality. A minimal sketch of the idea follows.
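This sketch assumes each object exposes Id and ParentId (hypothetical names based on the question) and that a lookup by id is available:

using System;
using System.Collections.Generic;

class DropdownList
{
    public int Id;
    public int? ParentId; // null when there is no parent list
}

static class CycleCheck
{
    // Walks the parent chain from 'start', recording visited ids;
    // returns true as soon as an id repeats (a cycle), false otherwise.
    public static bool HasCycle(DropdownList start, Func<int, DropdownList> getById)
    {
        var seen = new HashSet<int>();
        var current = start;
        while (current != null)
        {
            if (!seen.Add(current.Id))
                return true;  // id already visited: the chain loops
            current = current.ParentId.HasValue ? getById(current.ParentId.Value) : null;
        }
        return false;         // ran out of parents without repeating
    }
}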
If you have written a recursive function to manage those lists, one solution could be to create a list of elements and pass it as a parameter into the recursive function, adding the current item to the list on each call. To stop the recursion, just check whether the current item was already added to the list.
If you iterate through the actual elements of each list by relying on a separate counter for each list, you shouldn't encounter any problem. The most likely way to provoke an infinite loop is changing the value of a counter from an external source. Example:
for (int i = 0; i < max_i; i++)
{
    if (val1[i] != null)
    {
        for (int j = 0; j < max_j; j++)
        {
            if (val2[j] != null)
            {
                //Delete or anything
                //YOU MUST NOT AFFECT EITHER i OR j DIRECTLY.
            }
        }
    }
}
If you want to account for varying values of j in the internal part, you should rely on a different variable. Example:
if (val2[j] != null)
{
    int j2 = j;
    //Do whatever with j2, never with j
}
By doing this (associating a different counter with each loop), no endless loop will occur. An endless loop occurs when i = 1, 2, 3, 4 and suddenly i is changed back to 2 by an "external source"; thus the solution: NEVER change i other than through the for loop itself.
Thanks everyone for your input on this. I went with James' suggestion of using a list and ended up with the following code (which may or may not make sense to anyone but me):
public static bool BadParent(int fieldId, int childId, List<int> list)
{
    if (list == null)
        list = new List<int>();
    bool returnValue = true;
    var field = EkstraFelterBLL.getEkstraFeltUdfraEkstraFeltId(fieldId);
    if (field != null)
    {
        if (field.ParentEkstraFeltId == childId)
            returnValue = false; //loop reference, fail
        else if (list.Contains(field.EkstraFeltId))
            returnValue = false; //already been in the cycle, fail
        else
        {
            list.Add(field.EkstraFeltId);
            returnValue = BadParent(field.ParentEkstraFeltId, childId, list);
        }
    }
    return returnValue;
}
For now, the best I could think of is:
bool oneMoreTime = true;
while (oneMoreTime)
{
    ItemType toDelete = null;
    oneMoreTime = false;
    foreach (ItemType item in collection)
    {
        if (ShouldBeDeleted(item))
        {
            toDelete = item;
            break;
        }
    }
    if (toDelete != null)
    {
        collection.Remove(toDelete);
        oneMoreTime = true;
    }
}
I know that I have at least one extra variable here, but I included it to improve the readability of the algorithm.
The "RemoveAll" method is best.
Another common technique is:
var itemsToBeDeleted = collection.Where(i=>ShouldBeDeleted(i)).ToList();
foreach (var itemToBeDeleted in itemsToBeDeleted)
    collection.Remove(itemToBeDeleted);
Another common technique is to use a "for" loop, but make sure you go backwards:
for (int i = collection.Count - 1; i >= 0; --i)
    if (ShouldBeDeleted(collection[i]))
        collection.RemoveAt(i);
Another common technique is to add the items that are not being removed to a new collection:
var newCollection = new List<whatever>();
foreach (var item in collection.Where(i => !ShouldBeDeleted(i)))
    newCollection.Add(item);
And now you have two collections. A technique I particularly like if you want to end up with two collections is to use immutable data structures. With an immutable data structure, "removing" an item does not change the data structure; it gives you back a new data structure (that re-uses bits from the old one, if possible) that does not have the item you removed. With immutable data structures you are not modifying the thing you're iterating over, so there's no problem:
var newCollection = oldCollection;
foreach (var item in oldCollection.Where(i => ShouldBeDeleted(i)))
    newCollection = newCollection.Remove(item);
or
var newCollection = ImmutableCollection<whatever>.Empty;
foreach (var item in oldCollection.Where(i => !ShouldBeDeleted(i)))
    newCollection = newCollection.Add(item);
And when you're done, you have two collections. The new one has the items removed, the old one is the same as it ever was.
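As a concrete sketch of the immutable approach, assuming the System.Collections.Immutable NuGet package is available (ImmutableList<T> stands in for the hypothetical ImmutableCollection above):

using System;
using System.Collections.Immutable;

class ImmutableRemoveDemo
{
    static void Main()
    {
        var oldCollection = ImmutableList.Create(1, 2, 3, 4, 5);

        // RemoveAll returns a NEW list; oldCollection is untouched.
        var newCollection = oldCollection.RemoveAll(i => i % 2 == 0);

        Console.WriteLine(string.Join(",", oldCollection)); // 1,2,3,4,5
        Console.WriteLine(string.Join(",", newCollection)); // 1,3,5
    }
}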
Just as I finished typing, I remembered that there is a lambda way to do it.
collection.RemoveAll(i=>ShouldBeDeleted(i));
Better way?
A forward variation on the backward for loop:
for (int i = 0; i < collection.Count; )
    if (ShouldBeDeleted(collection[i]))
        collection.RemoveAt(i);
    else
        i++;
You cannot delete from a collection inside a foreach loop (unless it is a very special collection having a special enumerator). The BCL collections will throw exceptions if the collection is modified while it is being enumerated.
You could use a for loop to delete individual elements and adjust the index accordingly. However, doing that can be error prone. Depending on the implementation of the underlying collection, it may also be expensive to delete individual elements. For instance, deleting the first element of a List<T> will copy all the remaining elements in the list.
The best solution is often to create a new collection based on the old:
var newCollection = collection.Where(item => !ShouldBeDeleted(item)).ToList();
Use ToList() or ToArray() to create the new collection or initialize your specific collection type from the IEnumerable returned by the Where() clause.
The lambda way is good. You could also use a regular for loop; unlike a foreach loop, a for loop lets you modify the list you are iterating within the loop itself.
for (int i = collection.Count - 1; i >= 0; i--)
{
    if (ShouldBeDeleted(collection[i]))
        collection.RemoveAt(i);
}
I am assuming that collection is an ArrayList here; the code might be a bit different if you are using a different data structure.