Iterate through rows and cells using a for loop in C# Selenium

I have already checked many answers but all are using a foreach loop to iterate through rows. I want to use a for loop.
Code:
IWebElement NtTable = driver.FindElement(By.Id("nt-item-table"));
IReadOnlyCollection<IWebElement> TableRows = NtTable.FindElements(By.TagName("tr")).ToList();
for (int i = 0; i <= TableRows.Count; i++)
{
//driver.FindElement(TableRows[i])....
//Tablows.get(i)....
}
I tried the two commented lines above to get the text of a particular TD from a particular TR, but no useful members appear after I press the dot (.). The same approach works in Java but apparently not in C#.

There are several issues.
You don't need to chain the .FindElement() calls. It's more efficient to just get the TRs in one search, e.g. by CSS selector #nt-item-table tr.
You are calling .ToList() after the .FindElements() call, which is unnecessary; .FindElements() already returns an IReadOnlyCollection<IWebElement>.
You are iterating to <= TableRows.Count when you only need <. Indexes start at 0 and run to Count - 1.
You can't access elements in the IReadOnlyCollection using array notation, e.g. TableRows[i]. You need to use LINQ and ElementAt(i).
You have already found the TR elements, so statements like driver.FindElement(TableRows[i]).... make no sense; FindElement() takes a By locator, not an element.
Tablows is not a variable name that you have defined. The variable is TableRows.
IReadOnlyCollection<IWebElement> TableRows = driver.FindElements(By.CssSelector("#nt-item-table tr"));
for (int i = 0; i < TableRows.Count; i++)
{
// do something with TableRows.ElementAt(i);
}
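For the original goal of reading the text of a particular TD in a particular TR, the loop body could look something like this. This is only a sketch; it assumes System.Linq is imported (for ElementAt()) and that the rows actually contain td cells:
for (int i = 0; i < TableRows.Count; i++)
{
    // the cells of row i
    IReadOnlyCollection<IWebElement> cells = TableRows.ElementAt(i).FindElements(By.TagName("td"));
    for (int j = 0; j < cells.Count; j++)
    {
        string cellText = cells.ElementAt(j).Text; // text of TD j in TR i
    }
}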
You really should spend some time reading some Selenium and basic C# programming tutorials, syntax references, etc. It will help you avoid a lot of these issues.

I changed the variables to start with lowercase (camelCase). Have you tried this? You already have a list of tr elements, each of which has child td elements:
IWebElement ntTable = driver.FindElement(By.Id("nt-item-table"));
List<IWebElement> tableRows = ntTable.FindElements(By.TagName("tr")).ToList();
for (int i = 0; i < tableRows.Count; i++)
{
var tdCollection = tableRows[i].FindElements(By.TagName("td"));
for (int c = 0; c < tdCollection.Count; c++)
{
string column = tdCollection[c].Text;
}
}
I would prefer foreach, though, because then you don't need index variables.

Related

Index Out of Range on accessing List<List<T>> with For loops

Hobbyist C# coder here; I think I am missing something basic. I am trying to create a new List by parsing through a List<List<Deal>> using two for loops. I am getting Index Out of Range, although from what I can tell in debugging, there is data in the Deal object at the [index][index] location being accessed.
List<List<Deal>> Deals = await Database.LoadRecordsAsync(form, depts);
for (int dept = 0; dept <= Deals.Count; dept++)
{
List<Deal> batch = new List<Deal>();
for (int deal = Deals[dept].Count; deal >= 0; deal--)
{
batch.Add(Deals[dept][deal]); // Error here
}
}
In debugging, Deals has the indexes I expect, with the data I expect. Am I initializing something incorrectly?
The problem is the following line:
deal = Deals[dept].Count
This line should change as below:
deal = Deals[dept].Count - 1
The upper bound of the outer for loop has the same problem. The following
dept <= Deals.Count
should change as below:
dept < Deals.Count
Generally speaking, if you declare an array of n items the last item of the array can be accessed by using the index n-1.
That being said, if you initialize deal to Deals[dept].Count and later attempt to read this:
Deals[dept][deal]
you are out of the range of the array you have defined.
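Applying both fixes to the snippet from the question gives something like this (a sketch that keeps the original structure):
List<List<Deal>> Deals = await Database.LoadRecordsAsync(form, depts);
for (int dept = 0; dept < Deals.Count; dept++)          // < instead of <=
{
    List<Deal> batch = new List<Deal>();
    for (int deal = Deals[dept].Count - 1; deal >= 0; deal--)  // start at Count - 1
    {
        batch.Add(Deals[dept][deal]);
    }
}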

Binary search slower, what am I doing wrong?

EDIT: It looks like this is normal behavior, so can anyone recommend a faster way to do these numerous intersections?
So my problem is this: I have 8000 lists (strings in each list). For each list (ranging in size from 50 to 400), I'm comparing it to every other list and performing a calculation based on the size of the intersection. So I'll do
list1(intersect)list1= number
list1(intersect)list2= number
list1(intersect)list888= number
And I do this for every list. Previously I had a HashSet and my code was essentially this (well, I was actually searching through properties of an object, so I had to modify the code a bit, but it's basically this):
I have my two versions below, but if anyone knows anything faster, please let me know!
Loop through AllLists, getting each list, starting with list1, and then do this:
foreach (List list in AllLists)
{
if (list1_length < list_length) // just a check so I'm looping through the smaller list
{
foreach (string word in list1)
{
if (block.generator_list.Contains(word))
{
//simple integer count
}
}
}
// a little more code, but the same, but looping through the other list if it's smaller/bigger
Then I made the lists into regular lists and applied Sort(), which changed my code to
foreach (List list in AllLists)
{
if (list1_length < list_length) // just a check so I'm looping through the smaller list
{
for (int i = 0; i < list1_length; i++)
{
var test = list.BinarySearch(list1[i]);
if (test > -1)
{
//simple integer count
}
}
}
The first version takes about 6 seconds; the other one takes more than 20 (I just stopped it there because otherwise it would take more than a minute!), and this is for a smallish subset of the data.
I'm sure there's a drastic mistake somewhere, but I can't find it.
Well I have tried three distinct methods for achieving this (assuming I understood the problem correctly). Please note I have used HashSet<int> in order to more easily generate random input.
setting up:
List<HashSet<int>> allSets = new List<HashSet<int>>();
Random rand = new Random();
for(int i = 0; i < 8000; ++i) {
HashSet<int> ints = new HashSet<int>();
for(int j = 0; j < rand.Next(50, 400); ++j) {
ints.Add(rand.Next(0, 1000));
}
allSets.Add(ints);
}
the three methods I checked (code is what runs in the inner loop):
the loop:
note that you are getting duplicated results in your code (intersecting set A with set B and later intersecting set B with set A).
It won't affect your performance thanks to the list length check you are doing. But iterating this way is clearer.
for(int i = 0; i < allSets.Count; ++i) {
for(int j = i + 1; j < allSets.Count; ++j) {
}
}
first method:
used IEnumerable.Intersect() to get the intersection with the other list and checked IEnumerable.Count() to get the size of the intersection.
var intersect = allSets[i].Intersect(allSets[j]);
count = intersect.Count();
this was the slowest one averaging 177s
second method:
cloned the smaller of the two sets I was intersecting, then used ISet.IntersectWith() and checked the resulting set's Count.
HashSet<int> intersect;
HashSet<int> intersectWith;
if(allSets[i].Count < allSets[j].Count) {
intersect = new HashSet<int>(allSets[i]);
intersectWith = allSets[j];
} else {
intersect = new HashSet<int>(allSets[j]);
intersectWith = allSets[i];
}
intersect.IntersectWith(intersectWith);
count = intersect.Count;
this one was slightly faster, averaging 154s
third method:
did something very similar to what you did: iterated over the shorter set and checked ISet.Contains on the longer set.
for(int i = 0; i < allSets.Count; ++i) {
for(int j = i + 1; j < allSets.Count; ++j) {
HashSet<int> loopingSet, containsSet;
count = 0;
if(allSets[i].Count < allSets[j].Count) {
loopingSet = allSets[i];
containsSet = allSets[j];
} else {
loopingSet = allSets[j];
containsSet = allSets[i];
}
foreach(int k in loopingSet) {
if(containsSet.Contains(k)) {
++count;
}
}
}
}
this method was by far the fastest (as expected), averaging 66s
conclusion
the method you're using is the fastest of these three. I certainly can't think of a faster single threaded way to do this. Perhaps there is a better concurrent solution.
I've found that one of the most important considerations in iterating/searching any kind of collection is to choose the collection type very carefully. To iterate through a normal collection for your purposes will not be the most optimal. Try using something like:
System.Collections.Generic.HashSet<T>
Using the Contains() method while iterating over the shorter of the two lists (as you mentioned you're already doing) should give close to O(1) performance per lookup, the same as key lookups in the generic Dictionary type.
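For example, a minimal sketch assuming the data starts out as two List<string> variables (listA and listB are placeholder names):
// Build the sets once, then iterate the smaller one and probe the larger one.
var setA = new HashSet<string>(listA);
var setB = new HashSet<string>(listB);

HashSet<string> smaller, larger;
if (setA.Count <= setB.Count) { smaller = setA; larger = setB; }
else { smaller = setB; larger = setA; }

int count = 0;
foreach (string word in smaller)
{
    if (larger.Contains(word)) // average O(1) lookup
    {
        count++;
    }
}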

Modulus usage when dealing with odd numbers

I have a list of roughly 50~60 items that I want to be able to divide into multiple columns dynamically. I'm using a nested for loop and the lists divide properly when there are an even number of items. However, when there are an odd number of items the remainder (modulus) items get left out. I've been playing around with it for a while and have not struck gold yet. I'm hoping someone smarter than me can & will assist.
Thanks.
for (int fillRow = 0; fillRow < numOfCols; fillRow++)
{
for (int fillCell = 0; fillCell < (siteTitles.Count / numOfCols); fillCell++)
{
linkAddress = new HyperLink();
linkAddress.Text = tempSites[fillCell].ToString();
linkAddress.NavigateUrl = tempUrls[fillCell].ToString();
mainTbl.Rows[fillCell].Cells[fillRow].Controls.Add(linkAddress);
}
}
Well yes, the problem is here:
fillCell < (siteTitles.Count / numOfCols)
That division will round down, so for example if there are 13 titles and numOfCols is 5, it will give 2 - which means that items 10-12 won't be used.
I suggest that actually you loop over all the items instead, and work out the row and column for each item:
for (int i = 0; i < siteTitles.Count; i++)
{
int row = i / numOfCols;
int col = i % numOfCols;
// Fill in things using row, col and i
}
(It's not exactly clear what you're doing as you're using siteTitles in the loop condition and tempSites in the loop body, and you're not using fillRow when extracting the data... basically I think you've still got some bugs...)
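For illustration only, filling in the body with the variables from the question (this assumes tempSites, tempUrls and siteTitles are parallel collections and that mainTbl already has enough rows and cells, which, as noted above, isn't entirely clear):
for (int i = 0; i < siteTitles.Count; i++)
{
    int row = i / numOfCols;
    int col = i % numOfCols;

    HyperLink linkAddress = new HyperLink();
    linkAddress.Text = tempSites[i].ToString();
    linkAddress.NavigateUrl = tempUrls[i].ToString();
    mainTbl.Rows[row].Cells[col].Controls.Add(linkAddress);
}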

Row Index provided is out of range, even after check

My current code:
Remove()
{
for (int i = 0; i < ConGridView.RowCount; i++)
{
if (ConGridView.Rows[i].Cells[0].Value.ToString() == Address)
{
ConGridView.Rows.RemoveAt(i);
break;
}
}
}
So what I am trying to do is call the Remove function every time a client disconnects; the function removes the connection's address from the DataGridView. It works well when clients disconnect one by one. However, if 100 connections get dropped and it tries to remove 100 connections in less than a second, it errors out saying "Row Index provided is out of range". How should I check for that?
So far I've tried:
Try, catch.
if (ConGridView.Rows[i] != null), if (i < ConGridView.RowCount)
None of it seems to work so far. I've also seen cases using (i < ConGridView.RowCount) where i is 26 while RowCount is 24, but the RemoveAt call still executes.
Any idea on this ?
You can't do this. Your code loops through all the rows in ConGridView, but it deletes them as you do. Therefore, at some point you will try to access an item you have deleted, which will cause the error you described.
Probably the best approach is to iterate through the rows in reverse order. That way, deleting a row near the end won't affect the indexes of the rows you haven't visited yet.
The problem is you initialise your for loop with the current count of rows and then start removing those same rows from the datagridview. At some point your for loop will try to remove a row at an index that is greater than the number of rows left.
Try this instead:
for (int i = ConGridView.RowCount - 1; i >= 0; i--)
{
if (ConGridView.Rows[i].Cells[0].Value.ToString() == Address)
{
ConGridView.Rows.RemoveAt(i);
break;
}
}
Why don't you get the total count into a separate variable first and then iterate?
Remove()
{
int totalConnections = ConGridView.RowCount;
for (int i = 0; i < totalConnections ; i++)
{
if (ConGridView.Rows[i].Cells[0].Value.ToString() == Address)
{
ConGridView.Rows.RemoveAt(i);
break;
}
}
}
This issue is because you are modifying the collection you are iterating over. It would be better to use a temporary collection and two loops to remove your entries.
void Remove()
{
// You can use an array/list or whatever you want below (Collection<T> is in System.Collections.ObjectModel).
Collection<DataGridViewRow> rowsToDelete = new Collection<DataGridViewRow>();
for (int i = 0; i < ConGridView.RowCount; i++)
{
if (ConGridView.Rows[i].Cells[0].Value.ToString() == Address)
{
rowsToDelete.Add(ConGridView.Rows[i]);
break;
}
}
// now remove the marked entries.
foreach(DataGridViewRow deletedRow in rowsToDelete)
{
ConGridView.Rows.Remove(deletedRow);
}
}
When you remove an item from a list (or any array-backed collection), the remaining elements shift up by one to fill the gap at the index you removed.
1. guybrush threepwood
2. murray
3. elaine
4. Jimmy Gibbs Jr.
If you remove 2. item in here; it becomes this:
1. guybrush threepwood
2. elaine
3. Jimmy Gibbs Jr.
When you are iterating, imagine:
for (int i = 0; i < myArray.Count; i++)
{
if (i == 2) myArray.RemoveAt(i);
}
While running this, the removal at i = 2 shifts the later elements down by one, so the next index you visit no longer refers to the element you expect and one element gets skipped entirely. One way to fix this is to decrease i by one when you delete, making sure that i keeps referring to the correct element.
for (int i = 0; i < myArray.Count; i++)
{
if (i == 2)
{
myArray.RemoveAt(i);
i--;
}
}
I would just use List<T>.RemoveAll with a lambda in this case, though; everything is easier with that.
myArray.RemoveAll(x => x == "murray");
I've tried all the suggestions posted by everyone here; however, the error was still there.
I've solved the problem a different way... I've switched to a TreeView, since that's what I was going to use ultimately. Now I can remove as many connections as I want with:
foreach (TreeNode TN in ConTreeView.Nodes)
{
ConTreeView.Nodes.Remove(TN);
}

C# Best way to parse flat file with dynamic number of fields per row

I have a flat file that is pipe delimited and looks something like this as example
ColA|ColB|3*|Note1|Note2|Note3|2**|A1|A2|A3|B1|B2|B3
The first two columns are set and will always be there.
* denotes a count of how many repeating fields follow it, so here Note1, Note2, Note3.
** denotes a count of how many times a block of fields is repeated, and there are always 3 fields in a block.
This is per row, so each row may have a different number of fields.
Hope that makes sense so far.
I'm trying to find the best way to parse this file, any suggestions would be great.
The goal at the end is to map all these fields into a few different files - a data transformation. I'm actually doing all this within SSIS, but I figured the default components won't be good enough, so I need to write my own code.
UPDATE: I'm essentially trying to read this like a source file, do some lookups and string manipulation on some of the fields in between, and spit out several different files, like in any normal file-to-file transformation SSIS package.
Using the above example, I may want to create a new file that ends up looking like this
"ColA","HardcodedString","Note1CRLFNote2CRLF","ColB"
And then another file
Row1: "ColA","A1","A2","A3"
Row2: "ColA","B1","B2","B3"
So I guess I'm after some ideas on how to parse this, as well as how to store the data (Stacks? Lists? something else?) to play with and spit out later.
One possibility would be to use a stack. First you split the line by the pipes.
var stack = new Stack<string>(line.Split('|').Reverse()); // Reverse() (System.Linq) so Pop() returns the fields left to right
Then you pop the first two from the stack to get them out of the way.
stack.Pop();
stack.Pop();
Then you parse the next element: 3* . For that you pop the next 3 items on the stack. With 2** you pop the next 2 x 3 = 6 items from the stack, and so on. You can stop as soon as the stack is empty.
while (stack.Count > 0)
{
// Parse elements like 3*
}
Hope this is clear enough. I find this article very useful when it comes to String.Split().
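A rough sketch of that approach (untested; it assumes System.Collections.Generic and System.Linq are imported, the line is well formed, and every ** block holds exactly 3 fields — the notes/blocks variable names are just for illustration):
var stack = new Stack<string>(line.Split('|').Reverse()); // Pop() now goes left to right

string colA = stack.Pop();
string colB = stack.Pop();

var notes = new List<string>();
var blocks = new List<List<string>>();

while (stack.Count > 0)
{
    string token = stack.Pop();

    if (token.EndsWith("**"))                       // e.g. "2**": count of 3-field blocks
    {
        int blockCount = int.Parse(token.TrimEnd('*'));
        for (int b = 0; b < blockCount; b++)
        {
            var block = new List<string>();
            for (int f = 0; f < 3; f++)             // always 3 fields per block
            {
                block.Add(stack.Pop());
            }
            blocks.Add(block);
        }
    }
    else if (token.EndsWith("*"))                   // e.g. "3*": count of repeating fields
    {
        int noteCount = int.Parse(token.TrimEnd('*'));
        for (int n = 0; n < noteCount; n++)
        {
            notes.Add(stack.Pop());
        }
    }
}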
Something similar to below should work (this is untested)
ColA|ColB|3*|Note1|Note2|Note3|2**|A1|A2|A3|B1|B2|B3
string[] columns = line.Split('|');
List<string> repeatingColumnNames = new List<string>();
List<List<string>> repeatingFieldValues = new List<List<string>>();
if (columns.Length > 2)
{
    // columns[2] is e.g. "3*": how many repeating fields follow it
    int repeatingFieldCount = int.Parse(columns[2].TrimEnd('*'));
    int repeatingFieldStartIndex = 3;
    for (int i = 0; i < repeatingFieldCount; i++)
    {
        repeatingColumnNames.Add(columns[repeatingFieldStartIndex + i]);
    }

    // the next column, e.g. "2**", is how many 3-field blocks follow it
    int repeatingFieldSetCountIndex = repeatingFieldStartIndex + repeatingFieldCount;
    int repeatingFieldSetCount = int.Parse(columns[repeatingFieldSetCountIndex].TrimEnd('*'));
    int repeatingFieldSetStartIndex = repeatingFieldSetCountIndex + 1;
    const int fieldsPerSet = 3;
    for (int i = 0; i < repeatingFieldSetCount; i++)
    {
        string[] fieldSet = new string[fieldsPerSet];
        for (int j = 0; j < fieldsPerSet; j++)
        {
            fieldSet[j] = columns[repeatingFieldSetStartIndex + (i * fieldsPerSet) + j];
        }
        repeatingFieldValues.Add(new List<string>(fieldSet));
    }
}
System.IO.File.ReadAllLines("File.txt").Select(line => line.Split(new[] {'|'}))
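Or, to run every line of the file through the per-line parsing above (a sketch; "File.txt" is just a placeholder name):
foreach (string line in System.IO.File.ReadAllLines("File.txt"))
{
    string[] columns = line.Split('|');
    // parse columns for this row as shown above
}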
