Collection was modified; enumeration operation may not execute - thread-safe C#

I received an error when creating a thread to add items from a list to a listbox.
Here is the code:
private void textBoxSearch_TextChanged(object sender, EventArgs e)
{
listBoxSuggest.Items.Clear();
{
string temp = ((TextBox)sender).Text;
mythread = new Thread(()=> UpdateListBox(temp) );
mythread.Start();
}
}
private void UpdateListBox(string queyt)
{
if (queyt !=null)
{
if (myPrefixTree.Find(queyt))
{
var match = myPrefixTree.GetMatches(queyt);
foreach (string item in match)
this.Invoke((MethodInvoker)(() => listBoxSuggest.Items.Add(item)));
}
}
}
I received an error
Collection was modified; enumeration operation may not execute.
I need a solution to this problem.
update...
While running the program, I received an error in
foreach (string item in match)

The problem is that you called something, such as .Add or .Remove, which edits the contents of your enumeration while it was being iterated over. This causes the iteration to fail, because now it's not sure whether to proceed with the new element (which may have an index before the current index) or skip the old element (which may have already been processed, or may even be the current item).
You need to make sure that any loop which may modify the contents of the collection it's iterating over instead iterates over a copy of that collection. ToArray and ToList can both serve this purpose:
foreach(var item in collection.ToArray()) ...
- or -
foreach(var item in collection.ToList()) ...
This means that when something inevitably calls collection.Add somewhere within the body of your loop, it modifies the original collection, not the copy being iterated, and thus the error is prevented. It can, however, mean that the loop will process an item that was removed earlier in the iteration, in which case you may need a more complicated solution.
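A minimal sketch of both sides of this (the class and method names are mine, invented for illustration): iterating a ToList() snapshot while adding to the original list succeeds, while iterating the list directly throws the "Collection was modified" exception.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SnapshotDemo
{
    // Duplicates every item. Iterating 'items' directly while calling Add
    // would throw; iterating a ToList() snapshot is safe, because Add
    // mutates the original list, not the copy being enumerated.
    public static int DuplicateAll(List<string> items)
    {
        foreach (var item in items.ToList())
            items.Add(item); // safe: modifies 'items', not the snapshot
        return items.Count;
    }

    // Shows the failure mode: modifying the collection being enumerated.
    public static bool ThrowsWithoutSnapshot(List<string> items)
    {
        try
        {
            foreach (var item in items)
                items.Add(item); // invalidates the active enumerator
            return false;
        }
        catch (InvalidOperationException)
        {
            return true; // "Collection was modified; enumeration operation may not execute."
        }
    }
}
```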

Related

How to do a delay after every iteration in a foreach loop?

I want to add a delay after every iteration in a foreach loop, so the Treat method gets called only every 2 seconds. I do not want to use Thread.Sleep because I want to still be able to use my program while the loop is running. I'm trying to tell the foreach loop to wait 2 seconds before doing the next iteration.
Here's what I have so far:
public async void HandlingAsync(ListViewItem woodItem)
{
IList<ListViewItem> woodList = new List<ListViewItem>();
woodList.Add(woodItem);
foreach(ListViewItem Item in woodList)
{
await Task.Delay(2000);
Treat(Item);
}
}
public void Treat(ListViewItem woodItem)
{
woodItem.SubItems[3].Text = "dried";
woodItem.SubItems[4].Text = "Ok";
}
This doesn't work because the await command doesn't affect the foreach loop, only the commands inside of it. I can confirm that, because the ListView changes its items all at the same time, while it should change one item and wait 2 seconds before changing the next item.
EDIT: Marc Gravell is right. The foreach loop absolutely respects the await command, so his answer works 100%. My problem was neither in the foreach loop nor in the await command. It didn't work because my HandlingAsync method got called multiple times, which resulted in the Treat method being called multiple times almost instantly. That means that every woodItem got changed at the same time, with a delay of 2 seconds. To solve this, HandlingAsync should only be called once. Thanks for your help.
foreach respects await just fine; here's an example in a console:
static async Task Main()
{
string[] woodList = { "maple", "ash", "oak", "elm", "birch" };
foreach (string wood in woodList)
{
Treat(wood);
await Task.Delay(2000);
}
}
private static void Treat(string wood)
{
Console.WriteLine($"{DateTime.Now}: treating {wood}");
}
the output for me is (with obvious pauses):
28/08/2018 15:45:10: treating maple
28/08/2018 15:45:12: treating ash
28/08/2018 15:45:14: treating oak
28/08/2018 15:45:16: treating elm
28/08/2018 15:45:18: treating birch
So: if your code isn't behaving as expected - the problem isn't with the foreach/await. Can you perhaps show more of the surrounding context, and indicate what makes you think it isn't working?

Removing items from a listBox over time - C#

I'm currently working on a chatting program and the idea is to make it a secret one (Kind of like Facebook has the secret chat function).
My messages are sent to a listBox component and I want that every 10 or 'n' seconds the oldest message would get deleted. I
was trying to mark every message with an index but didn't quite understand how that works.
What I'm asking if maybe you guys know a function or could help me write one that does just that. I'm using Visual Studio 2015 Windows Forms, C#.
Well, when you have a ListBox, the items are all indexed, since it's an object collection (an array of objects), starting from 0 and going upward for newer entries.
So let's say we add 3 items to our ListBox
listBox1.Items.Add("Item 1"); //Index 0
listBox1.Items.Add("Item 2"); //Index 1
listBox1.Items.Add("Item 3"); //Index 2
All you would have to do, is create a thread that runs in the background that deletes the item at index 0 (the oldest entry) each time.
new Thread(() =>
{
while(true)
{
if(listBox1.Items.Count > 0) //Can't remove any items if we don't have any.
{
Invoke(new MethodInvoker(() => listBox1.Items.RemoveAt(0))); //Remove item at index 0.
//Needs invoking since we're accessing 'listBox1' from a separate thread.
}
Thread.Sleep(10000); //Wait 10 seconds.
}
}).Start(); //Spawn our thread that runs in the background.
In C# WinForms a ListBox holds its items in an ObjectCollection (msdn-link), so you can add any object you like; the text that gets displayed comes from the DisplayMember.
So for example
public class MyMessage {
public DateTime Received { get; set; }
public string Message { get; set; }
public string DisplayString
{
get { return this.ToString(); }
}
public override string ToString() {
return "[" + Received.ToShortTimeString() + "] " + Message;
}
}
can be added as a ListBox item.
Setting the DisplayMember to "DisplayString" (more here) will get you the correct output.
Now you can iterate through the ListBox items, cast them to MyMessage and check the time when they were received.
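A non-UI sketch of that idea (the pruning helper and the cutoff parameter are mine, not from the question): keep the MyMessage objects in a plain list, drop everything older than the cutoff, and rebind the survivors to the ListBox.

```csharp
using System;
using System.Collections.Generic;

public class MyMessage
{
    public DateTime Received { get; set; }
    public string Message { get; set; }

    public override string ToString()
        => "[" + Received.ToShortTimeString() + "] " + Message;
}

public static class MessagePruner
{
    // Removes every message received more than 'maxAge' before 'now' and
    // returns how many were removed. 'now' is a parameter to keep the logic
    // testable; in the form you would pass DateTime.Now from a timer tick.
    public static int PruneOlderThan(List<MyMessage> messages, TimeSpan maxAge, DateTime now)
        => messages.RemoveAll(m => now - m.Received > maxAge);
}
```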
I don't know if you thought about this but here's a way you could achieve this task.
First create an List of strings
List<string> list1 = new List<string>();
To use List<T> you will have to include the generic collections namespace in the form:
using System.Collections.Generic;
Now comes the tricky part.
First declare a static integer field at class level, i.e. outside all methods.
static int a;
Whenever you receive a message(considering your messages will be in string format) you've to add that string to list1 which you created.
list1.Add("the received message");
Now you have to declare a Timer (if you're new, check out how timers work). Windows Forms already has timers; using one of those is preferable.
The timer sends a Tick event after the desired time.
private void timer1_Tick(object sender, EventArgs e)
{
a = 0; // the oldest message sits at index 0, since new messages are appended to the end
list1.RemoveAt(a);
listBox.Items.Clear();
foreach(string x in list1)
{
listBox.Items.Add(x);
}
}
What this code will do is, at every Tick event of the timer, remove the oldest message from the list, clear the listbox, and refill it with the remaining messages.
To use the timer just drag and drop it on the form. It's all GUI based and easy to figure out.
Let me know if you have any doubts.
Tip: Make maximum use of try{} & catch{} blocks to avoid app crashes.

Is a lock possible per instance of an object?

I have understood that lock() locks a region of lines of code, other threads cannot access the locked line(s) of code. EDIT: this turns out to be just wrong.
Is it also possible to do that per instance of an object? EDIT: yes, that's is just the difference between static and non-static.
E.g. a null reference is checked during a lazy load, but in fact there is no need to lock other objects of the same type?
object LockObject = new object();
List<Car> cars;
public void Method()
{
if (cars == null)
{
cars = Lookup(..)
foreach (car in cars.ToList())
{
if (car.IsBroken())
{
lock(LockObject)
{
cars.Remove(car)
}
}
}
}
return cars;
}
EDIT, would this be a correct way to write this code:
Because when cars == null and thread A locks it, then another thread B will wait. Then when A is ready, B continues, but should check again whether cars == null, otherwise the code will execute again.
But this looks unnatural, I never saw such a pattern.
Note that locking around the first null-check would mean that you acquire a lock just to check for null, every single time, so that is not good.
public void Method()
{
if (cars == null)
{
lock(LockObject)
{
if (cars == null)
{
cars = Lookup(..)
foreach (car in cars.ToList())
{
if (car.IsBroken())
{
cars.Remove(car)
}
}
}
}
}
return cars;
}
It's important to realise that locking is very much a matter of the object locked on.
Most often we want to lock particular blocks of code entirely. As such we use a readonly field to lock on, and hence prevent any other execution of that code, either at all (if the field is static) or for the given instance (if the field is not static). However, that is the most common use, not the only possible use.
Consider:
ConcurrentDictionary<string, List<int>> idLists = SomeMethodOrSomething();
List<int> idList;
if (idLists.TryGetValue(someKey, out idList))
{
lock(idList)
{
if (!idList.Contains(someID))
idList.Add(someID);
}
}
Here the "locked" section can be called simultaneously by as many threads as you can get going. It cannot, however, be run simultaneously on the same list. Such a usage is unusual, and one has to be sure that nothing else can try to lock on one of the lists (easy if nothing else can access idLists or access any of the lists either before or after they are added into it, hard otherwise), but it does come up in real code.
But the important thing here is that obtaining the idList is itself threadsafe. When it came to creating a new idList this more narrowly-focused locking would not work.
Instead we'd have to do one of two things:
The simplest is just lock on a readonly field before any operation (the more normal approach)
The other is to use GetOrAdd:
List<int> idList = idLists.GetOrAdd(someKey, _ => new List<int>());
lock(idList)
{
if (!idList.Contains(someID))
idList.Add(someID);
}
Now an interesting thing to note here is that GetOrAdd() doesn't guarantee that if it calls the factory _ => new List<int>() the result of that factory is what will be returned. Nor does it promise to call it only once. Once we move away from the sort of code that just locks on a readonly field, the potential races get more complicated and more thought has to go into them (in this case the likely thought would be that if a race means more than one list is created, but only one is ever used and the rest get GC'd, then that's fine).
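A small sketch of that caveat (the key name and call counter are illustrative): even if the factory races, GetOrAdd only ever stores one list, and later calls return the stored one without invoking the factory again.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class GetOrAddDemo
{
    // Returns both retrieved lists plus the number of factory invocations.
    public static (List<int> First, List<int> Second, int FactoryCalls) Run()
    {
        var idLists = new ConcurrentDictionary<string, List<int>>();
        int calls = 0;
        Func<string, List<int>> factory = _ => { calls++; return new List<int>(); };

        var a = idLists.GetOrAdd("someKey", factory);
        var b = idLists.GetOrAdd("someKey", factory); // factory not invoked again
        return (a, b, calls);
    }
}
```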
To bring this back to your case. While the above shows that it's possible to lock not just as finely as your first example does, but much more finely again, the safety of it depends on the wider context.
Your first example is broken:
cars = Lookup(..)
foreach (car in cars.ToList()) // May be different cars to that returned from Lookup. Is that OK?
{
if (car.IsBroken()) // May not be in cars. Is that OK?
{ // IsBroken() may now return false. Is that OK?
lock(LockObject)
When the ToList() is called it may not be calling it on the same instance that was put into cars. This is not necessarily a bug, but it very likely is. To leave it you have to prove that the race is safe.
Each time a new car is obtained, cars may again have been overwritten in the meantime. Each time we enter the lock, the state of car may have changed so that IsBroken() would now return false.
It's possible for all of this to be fine, but showing that they are fine is complicated.
Well, it tends to be complicated when it is fine, sometimes complicated when it's not fine, but most often it's very simple to get the answer, "no, it is not okay". And in fact that is the case here, because of one last point of non-thread-safety that is also present in your second example:
if (cars == null)
{
lock(LockObject)
{
if (cars == null)
{
cars = Lookup(..)
foreach (car in cars.ToList())
{
if (car.IsBroken())
{
cars.Remove(car)
}
}
}
}
}
return cars; // Not thread-safe.
Consider, thread 1 examines cars and finds it null. Then it obtains a lock, checks that cars is still null (good), and if it is it sets it to a list it obtained from Lookup and starts removing "broken" cars.
Now, at this point thread 2 examines cars and finds it not-null. So it returns cars to the caller.
Now what happens?
Thread 2 can find "broken" cars in the list, because they haven't been removed yet.
Thread 2 can skip past cars because the list's contents are being moved around by Remove() while it is working on it.
Thread 2 can have the enumerator used by a foreach throw an exception because List<T>.Enumerator throws if you change the list while enumerating and the other thread is doing that.
Thread 2 can have an exception thrown that List<T> should never throw because Thread 1 is half-way in the middle of one of its methods and its invariants only hold before and after each method call.
Thread 2 can obtain a bizarre franken-car because it read part of a car before a Remove() and part after it. (Only if the type of Car is a value type; reads and writes of references are always individually atomic.)
All of this is obviously bad. The problem is that you are setting cars before it is in a state that is safe for other threads to look at. Instead you should do one of the following:
if (cars == null)
{
lock(LockObject)
{
if (cars == null)
{
var list = Lookup(..);
list.RemoveAll(car => car.IsBroken()); // RemoveAll returns a count, so filter first, then publish
cars = list;
}
}
}
return cars;
This doesn't set anything in cars until after the work on it has been done. As such another thread can't see it until it's safe to do so.
Alternatively:
if (cars == null)
{
var tempCars = Lookup(..);
tempCars.RemoveAll(car => car.IsBroken()); // RemoveAll returns a count, not the list
lock(LockObject)
{
if (cars == null)
{
cars = tempCars;
}
}
}
return cars;
This holds the lock for less time, but at the cost of potentially doing wasteful work just to throw it away. If it's safe to do this at all (it might not be), then there's a trade-off between potential extra time on the first few look-ups and less time in the lock. It's sometimes worth it, but generally not.
The best strategy to perform lazy initializing is by using properties for fields:
private List<Car> Cars
{
get
{
lock (lockObject)
{
return cars ?? (cars = Lookup(..));
}
}
}
Using your lock object here also makes sure that no other thread creates a second instance of the list.
Add and remove operations have also to be performed while locked:
void Add(Car car)
{
lock(lockObject) Cars.Add(car);
}
void Remove(Car car)
{
lock(lockObject) Cars.Remove(car);
}
Note the use of the Cars property to access the list!
Now you can get a copy of your list:
List<Car> copyOfCars;
lock(lockObject) copyOfCars = Cars.ToList();
Then it is possible to safely remove certain objects from the original list:
foreach (car in copyOfCars)
{
if (car.IsBroken())
{
Remove(car);
}
}
But be sure to use your own Remove(car) method which is locked inside.
Especially for List there is another possibility to cleanup elements inside:
lock(lockObject) Cars.RemoveAll(car => car.IsBroken());
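As an aside, Lazy<T> packages up exactly this kind of locked lazy initialization. A sketch, with delegates standing in for the question's Lookup and IsBroken (the class name and shapes are assumptions, not the question's code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class CarCache
{
    // LazyThreadSafetyMode.ExecutionAndPublication (the default) gives the
    // double-checked-locking guarantee: the factory runs at most once, and
    // no thread sees the list before it is fully built and filtered.
    private readonly Lazy<List<string>> cars;

    public CarCache(Func<List<string>> lookup, Func<string, bool> isBroken)
    {
        cars = new Lazy<List<string>>(() =>
        {
            var list = lookup();
            list.RemoveAll(c => isBroken(c)); // filter before publishing
            return list;
        }, LazyThreadSafetyMode.ExecutionAndPublication);
    }

    public List<string> Cars => cars.Value;
}
```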

How can I load items back into the listView control from a text file, and also add checkBoxes near each item, faster?

In this method I'm reading a text file from my hard disk and adding the items to the listView.
I also changed the listView property CheckBoxes to true in the Form1 designer.
Now when I'm running my program it takes about 10-15 seconds to load everything.
The form1 constructor:
LoadtoListView();
And the method LoadtoListView:
private void LoadtoListView()
{
int countit = 0;
using (StreamReader sr = new StreamReader(@"c:\listviewfile\databaseEN.txt"))
{
while (-1 < sr.Peek())
{
try
{
string name = sr.ReadLine();
string email = sr.ReadLine();
var lvi = new ListViewItem(name.Substring(name.IndexOf(":") + 1));
lvi.SubItems.Add(email.Substring(email.IndexOf(":") + 1));
listView1.Items.Add(lvi);
countit++;
}
catch (Exception) { }
}
sr.Close();
numberofforums = countit;
}
}
There are 547 items to load and 547 checkBoxes.
I tested it now: if I change the listView CheckBoxes property back to false in the designer, it loads fast, in about 1-2 seconds.
But once I turn the CheckBoxes property to true, it takes more than 10-15 seconds to load.
I guess the problem is that it's taking time to draw all the CheckBoxes.
Is there any way to make it all faster?
There are a couple of ways you could make this faster. Actually, you could make it blazingly fast, checkboxes or no checkboxes.
Your code requires a few tweaks here and there, and you must think about reusability, separation of concerns and parameterisation.
For instance, the method name LoadToListView is already doing too much work. It loads stuff, and it also populates the listview.
There are 3 ingredients that can get you to Nirvana.
First of all
consider creating an up front materialised list of ListViewItem instances, more particularly, create a primitive array, such as this (by the way, I will also sprinkle some other good practices along the way, even though it is not the lacking of those practices which causes your delays):
public ListViewItem[] LoadItems(string filePath) {
// not hardcoding the filePath is a good idea
List<ListViewItem> accumulator = new List<ListViewItem>();
int countit = 0;
using (StreamReader sr = new StreamReader(filePath)) {
while (-1 < sr.Peek()) {
try {
string name = sr.ReadLine();
string email = sr.ReadLine();
var lvi = new ListViewItem(name.Substring(name.IndexOf(":") + 1));
lvi.SubItems.Add(email.Substring(email.IndexOf(":") + 1));
// instead of adding this item to the list
// --> no more this:: listView1.Items.Add(lvi);
// just "accumulate" it
accumulator.Add(lvi);
countit++;
}
catch (Exception) { }
}
// no need to manually close the reader
// sr.Close();
// the using clause will close it for you
numberofforums = countit;
}
return accumulator.ToArray();
}
Okay.. So we've created an array of ListViewItem.
"So what?" you might think.
Well, for each and every Add invocation on your ListView, the ListView will try to react graphically (even if the GUI thread is occupied, it will still try). What that reaction consists of is not the concern of this answer. What you must understand is that instead of Add you can call AddRange, which takes an array of ListViewItem as its parameter. That causes just one graphical reaction for the whole set of ListViewItem instances, which speeds up your app a lot.
So here's ingredient number 2
Make another method which calls LoadItems and then calls AddRange on the ListView:
public void SomeOtherPlace() {
string filePath = @"....";
ListViewItem[] items = LoadItems(filePath);
this.listView1.Items.AddRange( items );
}
This will have already given your app the extra speed you were looking for, but even if the next step will not make your app elegant, it will surely help.
And ingredient number 3 :: Asynchrony
It would be grand if your UI didn't freeze while the LoadItems method was being called.
This "non-freezing" ability can be achieved in many ways, but the most modern and coolest way is to use Task<T> and the async and await keywords introduced in .NET 4.5 and C# 5.0.
If you don't have a clue about what these things are then, just enjoy the first two ingredients but don't hesitate to learn about the entities I've mentioned.
Basically what you need to do is:
make sure you can't possibly call SomeOtherPlace() twice, since what we're about to do makes that a possibility. So if you have, for instance, a button's event handler calling SomeOtherPlace, then we should disable that button and re-enable it once we're done
we will make the SomeOtherPlace() method be an async method, which allows it to await tasks
we will run the LoadItems code on a separate thread all nicely wrapped in a Task<ListViewItem[]> and await it on the GUI thread
Let's go. The first change is this:
public void SomeOtherPlace() {
becomes
public async void SomeOtherPlace() {
Secondly, we disable the button I talked about:
public async void SomeOtherPlace() {
this.button1.Enabled = false;
...
this.button1.Enabled = true;
}
Third, we turn this line:
ListViewItem[] items = LoadItems(filePath);
into this:
ListViewItem[] items = await Task.Factory.StartNew(() => LoadItems(filePath));
Now your method should look something like this:
public async void SomeOtherPlace() {
string filePath = @"....";
ListViewItem[] items = await Task.Factory.StartNew(() => LoadItems(filePath));
this.listView1.Items.AddRange( items );
}
Hope I didn't forget anything.
Good luck and don't settle for not understanding how things work under the hood!
Use listView1.Items.AddRange instead of adding one ListViewItem at a time. This should improve your load time.

How to check if a recursive method has ended?

I've just finished making this recursive method:
/// <summary>
/// Recursively process a given directory and add its file to Library.xml
/// </summary>
/// <param name="sourceDir">Source directory</param>
public void ProcessDir(string sourceDir)
{
string[] fileEntries = Directory.GetFiles(sourceDir, "*.mp3");
foreach (string fileName in fileEntries)
{
Song newSong = new Song();
newSong.ArtistName = "test artist";
newSong.AlbumName = "test album";
newSong.Name = "test song title";
newSong.Length = 1234;
newSong.FileName = fileName;
songsCollection.Songs.Add(newSong);
}
string[] subdirEntries = Directory.GetDirectories(sourceDir);
foreach (string subdir in subdirEntries)
{
if ((File.GetAttributes(subdir) & FileAttributes.ReparsePoint) != FileAttributes.ReparsePoint)
{
ProcessDir(subdir);
}
}
}
Everything is working as expected, the only problem I'm having is: How do I know when this method finishes execution? Is there something made for that very purpose in .NET?
There's nothing special in .NET that tells you this... Basically, the first call to ProcessDir will return after the recursion has ended.
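A tiny sketch of that point (the tree structure and logging are invented for illustration): the outermost call returns only after every nested call has finished, so whatever you place after it runs at the true end.

```csharp
using System;
using System.Collections.Generic;

public static class RecursionDemo
{
    // Depth-first visit that logs each node after all of its children are
    // done. The root's entry is always last: the first call returns only
    // once the entire recursion has completed.
    public static void Visit(string node,
                             Dictionary<string, string[]> children,
                             List<string> log)
    {
        if (children.TryGetValue(node, out var kids))
        {
            foreach (var child in kids)
                Visit(child, children, log);
        }
        log.Add(node + " done");
    }
}
```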
Well, you could always put a line of code indicating the end of execution after your initial ProcessDir call:
ProcessDir("MyDir");
Console.WriteLine("Done!");
You could try using a class-level counter to keep track of it.
private int _processDirTrack = 0;
public void ProcessDir(string sourceDir)
{
_processDirTrack++; // Increment at the start of each
string[] fileEntries = Directory.GetFiles(sourceDir, "*.mp3");
foreach (string fileName in fileEntries)
{
Song newSong = new Song();
newSong.ArtistName = "test artist";
newSong.AlbumName = "test album";
newSong.Name = "test song title";
newSong.Length = 1234;
newSong.FileName = fileName;
songsCollection.Songs.Add(newSong);
}
string[] subdirEntries = Directory.GetDirectories(sourceDir);
foreach (string subdir in subdirEntries)
{
if ((File.GetAttributes(subdir) & FileAttributes.ReparsePoint) != FileAttributes.ReparsePoint)
{
ProcessDir(subdir);
}
}
_processDirTrack--; // Decrement after the recursion. Fall through means it got to
// the end of a branch
if(_processDirTrack == 0)
{
Console.WriteLine("I've finished with all of them.");
}
}
I'm assuming the (albeit correct) answer RQDQ provided isn't the one you are looking for?
In case you have a long-running task whose progress you want to check, you can use a BackgroundWorker.
Of course this background worker doesn't magically know how many more files there are to process, so you have to call ReportProgress whenever you can give an estimate of how far along you are.
In order to try to estimate how much longer the processing will take, you could try the following:
Check how much total disk space the folder occupies, and keep track of how much you have already processed.
Check for amount of files you have to process vs. how much you still have to process.
If you want to know when it ends in order to signal/start another process in your application, you could raise an event. This may be overkill, but hey, it's another way to look at it if it suits your needs. You just need to add an event to the class where ProcessDir() is a member.
private int _processDirTrack = 0;
public event EventHandler DirProcessingCompleted;
At the end of your method you would raise your event like so
DirProcessingCompleted?.Invoke(this, EventArgs.Empty); // null-conditional guards against no subscribers
You subscribe to these events with an eventhandler somewhere else in your code
myClass.DirProcessingCompleted += new EventHandler(ProcessingComplete_Handler);
Note: you don't have to subscribe to an event in this manner; you could also subscribe with a delegate or lambda expression instead.
To wrap it all up you create your method that is called whenever an event is raised by the object.
private void ProcessingComplete_Handler(object sender, EventArgs e)
{
// perform other operations here that are desirable when the event is raised ...
}
I'm guessing what you're trying to figure out is how to know that you've reached the deepest level of recursion that is going to occur.
In this case, you'll know that you're in the last recursive call of the function when subdirEntries is empty, since the function will no longer recurse at that point. That's the best you can do, there's no universal answer as to how to know when a function will cease to recurse. It's entirely dependent upon what the conditions to recurse are.
Edit: Wanted to clarify. This will check each time you end a single chain of recursions. Considering your code can recurse multiple times per call, my solution will only signify the end of a single chain of recursions. In the case of recursively navigating a tree, this will occur at every leaf node, not at the deepest level of recursion.
