I've been trying for a while, but I'd like to modify a specific control's value without looping through all controls to check whether a control's Id property matches a given value.
This is the code I currently have, but I thought it might be more efficient using LINQ:
for (int i = 0; i < protectMaxPlayers; i++)
{
    // Update the protect time.
    protect.setProtectTime(i, protect.getProtectTime(i) - 1);

    // Set the progress bar.
    foreach (ProtectProgressBar pb in pnlProtect.Controls.OfType<ProtectProgressBar>())
    {
        if (pb.Id == i)
            pb.Value = protect.getProtectTime(i);
    }
}
This loops through ALL the progress bars in order to find the right one.
Is it possible to make this shorter?
Thanks in advance.
LINQ will iterate over the whole collection of progress bars as well, so it's not any better than your current solution.
You should consider preparing a Dictionary<int, ProtectProgressBar> and using it to find the correct bar by its Id:
var bars = pnlProtect.Controls.OfType<ProtectProgressBar>().ToDictionary(c => c.Id, c => c);

for (int i = 0; i < protectMaxPlayers; i++)
{
    // Update the protect time.
    protect.setProtectTime(i, protect.getProtectTime(i) - 1);

    ProtectProgressBar bar;
    if (bars.TryGetValue(i, out bar))
    {
        bar.Value = protect.getProtectTime(i);
    }
}
A Dictionary<TKey, TValue> lookup is done in O(1) time, so it should be better than your current solution.
To accomplish this task, you must first import the LINQ namespace at the top of the file:
using System.Linq;
Then use code like this:
for (int i = 0; i < protectMaxPlayers; i++)
{
    // Update the protect time.
    protect.setProtectTime(i, protect.getProtectTime(i) - 1);

    // Find the matching progress bar (matches the Id property used in the question).
    ProtectProgressBar pb = pnlProtect.Controls.OfType<ProtectProgressBar>().ToList().Find(k => k.Id == i);

    // Check if it was found.
    if (pb != null)
    {
        // your code
    }
}
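A slightly shorter variant of that lookup (a sketch, assuming Id is an int as in the question's code) avoids building the intermediate list by using FirstOrDefault inside the same loop:
// FirstOrDefault returns null when no bar has a matching Id.
ProtectProgressBar pb = pnlProtect.Controls.OfType<ProtectProgressBar>().FirstOrDefault(k => k.Id == i);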
Regards, Wiliam.
Related
I'm trying to pull all the values from another program's DataGridView. For that I'm using FlaUI. I wrote code that does what I want, but it is very slow. Is there a faster way to pull all the values from another program's DataGridView using FlaUI?
my code:
var desktop = automation.GetDesktop();
var window = desktop.FindFirstDescendant(cf => cf.ByName("History: NEWLIFE")).AsWindow();
var table = window.FindFirstDescendant(cf => cf.ByName("DataGridView")).AsDataGridView();

// Remove the last row if we have the "add" row.
int rowscount = (table.FindAllChildren(cf => cf.ByProcessId(30572)).Length) - 2;

var SCAN = new List<string>(); // collected cell values (declared elsewhere in the original code)

for (int i = 0; i < rowscount; i++)
{
    string string1 = "Row " + i;
    string string2 = "Symbol Row " + i;
    var RowX = table.FindFirstDescendant(cf => cf.ByName(string1));
    var SymbolRowX = RowX.FindFirstDescendant(cf => cf.ByName(string2));
    SCAN.Add("" + SymbolRowX.Patterns.LegacyIAccessible.Pattern.Value);
}

var message = string.Join(Environment.NewLine, SCAN);
MessageBox.Show(message);
Thank you in advance.
Searching for descendants is pretty slow, as it will go through all objects in the tree until it finds the desired control (or there are no controls left). It might be much faster to use the grid pattern to find the desired cells, or to get all rows at once and loop through them, roughly as in the sketch below.
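A rough, untested sketch of that idea (it assumes the table actually supports FlaUI's grid wrapper and that the cells expose LegacyIAccessible like in your code; member names may differ):
var grid = table.AsGrid();     // grid wrapper instead of name-based descendant searches
var values = new List<string>();
foreach (var row in grid.Rows) // all rows in one go
{
    // First cell of each row; adjust the index to the column you need.
    var cell = row.Cells[0];
    values.Add("" + cell.Patterns.LegacyIAccessible.Pattern.Value);
}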
Alternatively, you could try caching, as UIA uses inter-process calls, which are generally slow. Each Find method or value-property read makes such a call, and with a large grid that can add up pretty badly. For exactly that case, using UIA caching can make sense.
For that, you would get everything you need (all descendants of the table and the LegacyIAccessible pattern) in one go inside a cache request, and then loop through those elements in code with CachedChildren and the like.
A simple example for this can be found at the FlaUI wiki at https://github.com/FlaUI/FlaUI/wiki/Caching:
var grid = <FindGrid>.AsGrid();

var cacheRequest = new CacheRequest();
cacheRequest.TreeScope = TreeScope.Descendants;
cacheRequest.Add(Automation.PropertyLibrary.Element.Name);

using (cacheRequest.Activate())
{
    var rows = grid.Rows;
    foreach (var row in rows)
    {
        foreach (var cell in row.CachedChildren)
        {
            Console.WriteLine(cell.Name);
        }
    }
}
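To adapt that to your case, the cell values themselves also need to be in the cache. A rough, untested sketch (it assumes CacheRequest.Add also accepts a pattern id from the automation instance's PatternLibrary; the exact identifiers may differ in your FlaUI version):
var cacheRequest = new CacheRequest();
cacheRequest.TreeScope = TreeScope.Descendants;
cacheRequest.Add(automation.PropertyLibrary.Element.Name);
cacheRequest.Add(automation.PatternLibrary.LegacyIAccessiblePattern); // cache the pattern too

using (cacheRequest.Activate())
{
    var grid = table.AsGrid();
    foreach (var row in grid.Rows)
    {
        foreach (var cell in row.CachedChildren)
        {
            // Read from the cache instead of issuing a new cross-process call per cell.
            SCAN.Add("" + cell.Patterns.LegacyIAccessible.Pattern.Value);
        }
    }
}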
In my application I want the program to search through a list, testing each list element. If the list element is the required length I then want this to be inserted into a new list. Below is the code I have already
List<string> foo = new List<string>();
List<string> newFoo = new List<string>();

for (int h = 0; h < l; h++);
{
    // Here I want to search through every element of foo and if the element
    // length is greater than say 5 I want to add it to the newFoo
}
I don't know how to search through each element, and any examples I can find use LINQ, which I don't want to do as I'm sure there is a simpler way. Any help much appreciated.
It sounds like you're looking for a foreach loop:
foreach (string element in foo)
{
    if (element.Length > 5)
    {
        newFoo.Add(element);
    }
}
However, assuming you start with an empty newFoo list, this is better done with LINQ:
List<string> newFoo = foo.Where(x => x.Length > 5).ToList();
Or if you already have an existing list, you can use:
newFoo.AddRange(foo.Where(x => x.Length > 5));
(In my experience it's more common to be creating a new list, mind you.)
If you're new to C#, you should probably make sure you understand the first form before you move on to use LINQ, lambda expressions etc.
Note that if you really, really want to use a straight for loop instead of a foreach loop, you can do so:
for (int i = 0; i < foo.Count; i++)
{
    string element = foo[i];
    if (element.Length > 5)
    {
        newFoo.Add(element);
    }
}
... but I'd strongly recommend using foreach any time you want to iterate over a sequence and don't really care about the index of each entry.
You may use something like this (foreach loop):
foreach (String item in foo)
    if (!Object.ReferenceEquals(null, item)) // <- be careful with nulls!
        if (item.Length > 5)
            newFoo.Add(item);
Or if you prefer index-based access:
for (int i = 0; i < foo.Count; ++i)
    if (!Object.ReferenceEquals(null, foo[i])) // <- be careful with nulls!
        if (foo[i].Length > 5)
            newFoo.Add(foo[i]);
Yet another possibility is LINQ, e.g.
// Do not forget the nulls...
newFoo.AddRange(foo.Where(item => Object.ReferenceEquals(null, item) ? false : item.Length > 5));
Without LINQ, you can do it with a simple loop:
foreach (var f in foo)
{
    if (f.Length > 5)
    {
        newFoo.Add(f);
    }
}
But with LINQ, it's even simpler:
newFoo = foo.Where(f => f.Length > 5).ToList();
You can use LINQ to filter the items with Length > 5 into your newFoo list:
List<string> newFoo = foo.Where(r => r.Length > 5).ToList();
If you want to use a simple for loop instead:
for (int h = 0; h < foo.Count; h++)
{
    if (foo[h] != null && foo[h].Length > 5)
        newFoo.Add(foo[h]);
}
(Remember to remove the semicolon at the end of your for loop; as written, the loop does nothing, because the lone ; is treated as its only statement.)
Let's say I have two List<string>. These are populated from the results of reading a text file
List owner contains:
cross
jhill
bbroms
List assignee contains:
Chris Cross
Jack Hill
Bryan Broms
During the read from a SQL source (the SQL statement contains a join)... I would perform
if (sqlReader["projects.owner"] == "something in owner list" || sqlReader["assign.assignee"] == "something in assignee list")
{
    // add this project's information to the primary results list
    list_by_owner.Add(sqlReader["projects.owner"], sqlReader["projects.project_date_created"], sqlReader["projects.project_name"], sqlReader["projects.project_status"]);

    // if the assignee is not null, also add to the secondary results list
    // logic to determine if assign.assignee is null goes here
    list_by_assignee.Add(sqlReader["assign.assignee"], sqlReader["projects.owner"], sqlReader["projects.project_date_created"], sqlReader["projects.project_name"], sqlReader["projects.project_status"]);
}
I do not want to end up using nested foreach loops.
A for loop would probably suffice. Someone mentioned Zip to me, but I wasn't sure whether that would be a preferable route in my situation.
One loop to iterate through both lists (assuming both have the same count):
for (int i = 0; i < alpha.Count; i++)
{
    var itemAlpha = alpha[i]; // <= your object from list alpha
    var itemBeta = beta[i];   // <= your object from list beta
    // write your code here
}
From what you describe, you don't need to iterate at all.
This is what you need:
http://msdn.microsoft.com/en-us/library/bhkz42b3.aspx
Usage:
if (listAlpha.Contains(resultA) || listBeta.Contains(resultA))
{
    // do your operation
}
The list iteration will happen implicitly inside the Contains method, and that's 2n comparisons vs n*n for nested iteration.
If you do need to iterate yourself, you would be better off iterating each list sequentially, one after the other, rather than nesting the loops.
This list might be better represented as a List<KeyValuePair<string, string>>, which would pair the two list values together in a single list; see the sketch below.
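A minimal sketch of that pairing (assuming the owner and assignee lists have the same length and are aligned by index):
// Pair owner[i] with assignee[i] in a single list.
var pairs = new List<KeyValuePair<string, string>>();
for (int i = 0; i < owner.Count; i++)
{
    pairs.Add(new KeyValuePair<string, string>(owner[i], assignee[i]));
}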
There are several options for this. The least "painful" would be a plain old for loop:
for (var index = 0; index < alpha.Count; index++)
{
    var alphaItem = alpha[index];
    var betaItem = beta[index];
    // Do something.
}
Another interesting approach is using the indexed LINQ methods (but remember they are evaluated lazily; you have to consume the resulting enumerable, and the lambda must return a value), for example:
alpha.Select((alphaItem, index) =>
{
    var betaItem = beta[index];
    // Do something, then return a value so Select compiles.
    return alphaItem;
}).ToList(); // consume the query so it actually executes
Or you can enumerate both collections if you use the enumerators directly:
using (var alphaEnumerator = alpha.GetEnumerator())
using (var betaEnumerator = beta.GetEnumerator())
{
    while (alphaEnumerator.MoveNext() && betaEnumerator.MoveNext())
    {
        var alphaItem = alphaEnumerator.Current;
        var betaItem = betaEnumerator.Current;
        // Do something
    }
}
Zip (if you need pairs) or Concat (if you need a combined list) are possible options for iterating two lists at the same time; a sketch follows.
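A minimal Zip sketch (assuming alpha and beta are the two lists; Zip stops at the end of the shorter sequence):
var combined = alpha.Zip(beta, (a, b) => new { Alpha = a, Beta = b });
foreach (var pair in combined)
{
    // Each pair holds one element from each list.
    Console.WriteLine(pair.Alpha + " - " + pair.Beta);
}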
I like doing something like this to enumerate over parallel lists:
int alphaCount = alpha.Count ;
int betaCount = beta.Count ;
int i = 0 ;

while ( i < alphaCount && i < betaCount )
{
    var a = alpha[i] ;
    var b = beta[i] ;
    // handle matched alpha/beta pairs
    ++i ;
}
while ( i < alphaCount )
{
    var a = alpha[i] ;
    // handle unmatched alphas
    ++i ;
}
while ( i < betaCount )
{
    var b = beta[i] ;
    // handle unmatched betas
    ++i ;
}
Here is my code:
foreach (OrderItem item in OrderInfo.order)
{
    orderItemViews.Single(i => i.numericUpDown.Name == item.id.ToString()).numericUpDown.Value = item.count;
}
It gives an exception.
I know that I can't change the collection inside a foreach.
How can I change this code to make it work? Ideally it would be LINQ code.
The exception says that "the collection was modified". Sorry, I can't provide the real exception message because it is not in English.
Sorry guys, I've found where the collection is being changed: it was inside the *numericUpDown_ValueChanged* handler. Anyway, I've got an answer. Thank you.
You can use ToList(), like this:
foreach (OrderItem item in OrderInfo.order.ToList())
{
    orderItemViews.Single(i => i.numericUpDown.Name == item.id.ToString()).numericUpDown.Value = item.count;
}
Or use a normal for loop:
for (int i = 0; i < OrderInfo.order.Count; i++)
{
    OrderItem item = OrderInfo.order[i];
    // Note: the lambda parameter must not reuse the loop variable name i.
    orderItemViews.Single(v => v.numericUpDown.Name == item.id.ToString()).numericUpDown.Value = item.count;
}
Tip: performance-wise, it's better to use the second way, since it doesn't copy the list.
This is what I do, when I need to modify the collection.
foreach (OrderItem item in OrderInfo.order.ToList())
{
    ...
}
Create a copy. Enumerate the copy, but update the original one.
You can use an extension ToEach static method:
public delegate void ToEachTransformAction<T>(ref T element);

public static class ListExtensions
{
    public static void ToEach<T>(this IList<T> list, ToEachTransformAction<T> transform)
    {
        for (var n = 0; n < list.Count; n++)
        {
            T element = list[n];
            transform(ref element);
            list[n] = element; // write the transformed value back
        }
    }
}
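A hypothetical usage sketch (the list and the transformation are placeholders, not from the question):
var counts = new List<int> { 3, 2, 1 };
// Decrement every element in place via the ref parameter.
counts.ToEach((ref int n) => n--);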
I'm writing a duplicate file detector. To determine if two files are duplicates I calculate a CRC32 checksum. Since this can be an expensive operation, I only want to calculate checksums for files that have another file with matching size. I have sorted my list of files by size, and am looping through to compare each element to the ones above and below it. Unfortunately, there is an issue at the beginning and end since there will be no previous or next file, respectively. I can fix this using if statements, but it feels clunky. Here is my code:
public void GetCRCs(List<DupInfo> dupInfos)
{
    var crc = new Crc32();
    for (int i = 0; i < dupInfos.Count(); i++)
    {
        if (dupInfos[i].Size == dupInfos[i - 1].Size || dupInfos[i].Size == dupInfos[i + 1].Size)
        {
            dupInfos[i].CheckSum = crc.ComputeChecksum(File.ReadAllBytes(dupInfos[i].FullName));
        }
    }
}
My question is:
How can I compare each entry to its neighbors without the out of bounds error?
Should I be using a loop for this, or is there a better LINQ or other function?
Note: I did not include the rest of my code to avoid clutter. If you want to see it, I can include it.
Compute the CRCs first:
// It is assumed that DupInfo.CheckSum is nullable
public void GetCRCs(List<DupInfo> dupInfos)
{
    var crc = new Crc32(); // as in the question's code
    dupInfos[0].CheckSum = null;
    for (int i = 1; i < dupInfos.Count(); i++)
    {
        dupInfos[i].CheckSum = null;
        if (dupInfos[i].Size == dupInfos[i - 1].Size)
        {
            if (dupInfos[i - 1].CheckSum == null)
                dupInfos[i - 1].CheckSum = crc.ComputeChecksum(File.ReadAllBytes(dupInfos[i - 1].FullName));
            dupInfos[i].CheckSum = crc.ComputeChecksum(File.ReadAllBytes(dupInfos[i].FullName));
        }
    }
}
After having sorted your files by size and CRC, identify the duplicates:
public void GetDuplicates(List<DupInfo> dupInfos)
{
    // The loop runs backwards to allow deleting list items as we go.
    for (int i = dupInfos.Count() - 1; i > 0; i--)
    {
        if (dupInfos[i].Size == dupInfos[i - 1].Size &&
            dupInfos[i].CheckSum != null &&
            dupInfos[i].CheckSum == dupInfos[i - 1].CheckSum)
        {
            // i is a duplicate of i - 1
            ... // your code here
            ... // eventually, dupInfos.RemoveAt(i);
        }
    }
}
I have sorted my list of files by size, and am looping through to
compare each element to the ones above and below it.
The next logical step is to actually group your files by size. Comparing consecutive files will not always be sufficient if you have more than two files of the same size. Instead, you will need to compare every file to every other same-sized file.
I suggest taking this approach:
Use LINQ's .GroupBy to create groups of files by size, then .Where to keep only the groups with more than one file.
Within those groups, calculate the CRC32 checksum and add it to a collection of known checksums, comparing with the previously calculated ones. If you need to know which files specifically are duplicates, you could use a dictionary keyed by this checksum (you can achieve this with another GroupBy); otherwise a simple list will suffice to detect any duplicates.
The code might look something like this:
var filesSetsWithPossibleDupes = files.GroupBy(f => f.Length)
                                      .Where(group => group.Count() > 1);

var crc = new Crc32(); // as in the question's code
foreach (var grp in filesSetsWithPossibleDupes)
{
    var checksums = new List<CRC32CheckSum>(); // or whatever type
    foreach (var file in grp)
    {
        var currentCheckSum = crc.ComputeChecksum(file);
        if (checksums.Contains(currentCheckSum))
        {
            // Found a duplicate
        }
        else
        {
            checksums.Add(currentCheckSum);
        }
    }
}
Or if you need the specific objects that could be duplicates, the inner foreach loop might look like
var filesSetsWithPossibleDupes = files.GroupBy(f => f.FileSize)
                                      .Where(grp => grp.Count() > 1);

// A dictionary keyed by the basic duplicate stats,
// whose value is a collection of the possible duplicates.
var masterDuplicateDict = new Dictionary<DupStats, IEnumerable<DupInfo>>();

foreach (var grp in filesSetsWithPossibleDupes)
{
    // Same GroupBy logic, but applied to the checksum (instead of file size).
    var likelyDuplicates = grp.GroupBy(dup => dup.Checksum)
                              .Where(g => g.Count() > 1);

    foreach (var dupGrp in likelyDuplicates)
    {
        // Create the key for the dictionary (your code is likely different).
        var sample = dupGrp.First();
        var key = new DupStats() { FileSize = sample.FileSize, Checksum = sample.Checksum };
        masterDuplicateDict.Add(key, dupGrp);
    }
}
A demo of this idea.
I think the for loop should be: for (int i = 1; i < dupInfos.Count() - 1; i++)
var grps = dupInfos.GroupBy(d => d.Size);
grps.Where(g => g.Count() > 1).ToList().ForEach(g =>
{
    ...
});
Can you do a union between your two lists? If you have a list of filenames and do a union, it should result in a list of only the overlapping files. I can write out an example if you want, but this link should give you the general idea:
https://stackoverflow.com/a/13505715/1856992
Edit: Sorry, for some reason I thought you were comparing file names, not sizes.
So here is an actual answer for you.
using System;
using System.Collections.Generic;
using System.Linq;

public class ObjectWithSize
{
    public int Size { get; set; }

    public ObjectWithSize(int size)
    {
        Size = size;
    }
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine("start");
        var list = new List<ObjectWithSize>();
        list.Add(new ObjectWithSize(12));
        list.Add(new ObjectWithSize(13));
        list.Add(new ObjectWithSize(14));
        list.Add(new ObjectWithSize(14));
        list.Add(new ObjectWithSize(18));
        list.Add(new ObjectWithSize(15));
        list.Add(new ObjectWithSize(15));

        var duplicates = list.GroupBy(x => x.Size)
                             .Where(g => g.Count() > 1);

        foreach (var dup in duplicates)
            foreach (var objWithSize in dup)
                Console.WriteLine(objWithSize.Size);
    }
}
This will print out
14
14
15
15
Here is a .NET Fiddle for that:
https://dotnetfiddle.net/0ub6Bs
Final note: I actually think your answer looks better and will run faster. This was just an implementation in LINQ.