Selenium List Sort Order Comparison in C#

I have a typical web page with check boxes to filter on and a drop-down to sort by price, low to high and high to low. The list/XPath code below returns 25 elements with varying prices.
What I want to do is assert that when I select the low-to-high sort option, my prices look like $10, $10, $12, $15, etc. What is my next step here? It has been a while since I have worked with lists and my mind is drawing a blank.
I think I used LINQ last time, sorted the results, and ran a compare, but after looking at this for three hours I have gone brain dead.
Suggestions?
public List<string> GetInitialSortByPrice()
{
    List<string> item = new List<string>();
    IReadOnlyList<IWebElement> cells =
        Browser.FindElements_byXpath("//h4[@class='price']");
    foreach (IWebElement cell in cells)
    {
        item.Add(cell.Text);
    }
    return item;
}
Solution....
List<string> item = new List<string>();
IReadOnlyList<IWebElement> cells =
    Browser.FindElements_byXpath("//h4[@class='price']");
foreach (IWebElement cell in cells)
{
    item.Add(cell.Text.Replace("$", ""));
}
List<decimal> listA = item.Select(s => decimal.Parse(s)).ToList();
List<decimal> listB = listA.OrderBy(x => x).ToList();
Assert.IsTrue(listA.SequenceEqual(listB));
This was for the descending order:
List<decimal> listB = listA.OrderByDescending(i => i).ToList();
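Putting the pieces above together, here is a minimal, self-contained sketch of the ascending check. The sample price strings are hypothetical stand-ins for what the Selenium call would return; it also shows `NumberStyles.Currency` as an alternative to stripping the "$" by hand.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

class SortCheckSketch
{
    static void Main()
    {
        // Hypothetical price strings, standing in for cell.Text values.
        var scraped = new List<string> { "$10", "$10", "$12", "$15" };

        // NumberStyles.Currency tolerates the "$" sign during parsing.
        List<decimal> actual = scraped
            .Select(s => decimal.Parse(s, NumberStyles.Currency, new CultureInfo("en-US")))
            .ToList();

        // The list is sorted ascending iff it equals a sorted copy of itself.
        List<decimal> expected = actual.OrderBy(x => x).ToList();
        Console.WriteLine(actual.SequenceEqual(expected)); // True for this sample
    }
}
```

For the descending assertion, swap `OrderBy` for `OrderByDescending` when building `expected`.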

Related

C# Comparing if two lists have the same order of items (alphabetical)

I'm facing a big problem comparing two lists. I made a copy of my first list and sorted it. Now I want to compare the original list with the sorted one to see whether they are in the same alphabetical order. I hope I have provided enough information about my problem.
Thanks in advance
public void VerifyDataPrijave(string username)
{
    List<string> listaTekstova = new List<string>(); // initializing a new, empty list
    var kartice = Repo.Kartice.CreateAdapter<Unknown>(false).Find(".//div[class='_63fz removableItem _95l5']");
    foreach (var kartica in kartice)
    {
        var slika = kartica.Find(".//tag[tagname='img']")[0];
        var ime = slika.Find("following-sibling::div")[0];
        string text = ime.GetAttributeValue("InnerText").ToString(); // loop through the profile cards, getting each name as InnerText
        listaTekstova.Add(text); // adding those "texts" to the empty list initialized above
        List<string> novaListaTekstova = new List<string>(listaTekstova); // clone (copy) of the very first list
        novaListaTekstova.Sort(); // sorting that copy alphabetically
    }
}
You can use SequenceEqual to compare two IEnumerables. Note that in your code the clone and the Sort happen inside the foreach loop; move them after it so you compare the complete lists. Once all sorting has been done, you can do something like this:
var isEqual = novaListaTekstova.SequenceEqual(listaTekstova);
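A minimal, self-contained sketch of that sorted-copy comparison; the sample names are hypothetical stand-ins for the InnerText values collected from the cards.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class OrderCheck
{
    static void Main()
    {
        // Hypothetical names, in the order they appeared on the page.
        var listaTekstova = new List<string> { "Ana", "Marko", "Ivana" };

        // Clone the list, then sort only the copy; the original keeps page order.
        var novaListaTekstova = new List<string>(listaTekstova);
        novaListaTekstova.Sort(StringComparer.Ordinal);

        // True only if the page already showed the names alphabetically.
        bool isEqual = novaListaTekstova.SequenceEqual(listaTekstova);
        Console.WriteLine(isEqual); // False for this sample
    }
}
```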

How do I remove duplicates from excel range? c#

I've converted the cells in my Excel range from strings into a string list and separated each item at the commas in the original text. I am starting to think I have not actually separated the items and they are still one whole string. I'm trying to figure out how to do this properly so that each item (e.g. the_red_bucket_01) is its own string.
example of original string in a cell 1 and 2:
Cell1 :
the_red_bucket_01, the_blue_duck_01,_the green_banana_02, the orange_bear_01
Cell2 :
the_purple_chair_01, the_blue_coyote_01,_the green_banana_02, the orange_bear_01
The new list looks like this, though I'm not sure they are separate items:
the_red_bucket_01
the_blue_duck_01
the green_banana_02
the orange_bear_01
the_red_chair_01
the_blue_coyote_01
the green_banana_02
the orange_bear_01
Now I want to remove duplicates so that the console shows only one of each item, no matter how many there are, but I can't seem to get my foreach/if statements to work. It is printing out multiple copies of the items, I'm assuming because it iterates for each item in the list and so returns the data that many times.
foreach (Excel.Range item in xlRng)
{
    string itemString = (string)item.Text;
    List<string> fn = new List<string>(itemString.Split(','));
    List<string> newList = new List<string>();
    foreach (string s in fn)
        if (!newList.Contains(s))
        {
            newList.Add(s);
        }
    foreach (string combo in newList)
    {
        Console.Write(combo);
    }
}
You probably need to trim the strings, because they have leading white space, so "string1" is different from " string1".
foreach (string s in fn)
    if (!newList.Contains(s.Trim()))
    {
        newList.Add(s.Trim());
    }
You can do this much more simply with LINQ by using Distinct.
Returns distinct elements from a sequence by using the default
equality comparer to compare values.
foreach (Excel.Range item in xlRng)
{
    string itemString = (string)item.Text;
    List<string> fn = new List<string>(itemString.Split(','));
    foreach (string combo in fn.Distinct())
    {
        Console.Write(combo);
    }
}
As mentioned in another answer, you may also need to Trim any whitespace, in which case you would do:
fn.Select(x => x.Trim()).Distinct()
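A self-contained sketch of that Trim-then-Distinct pipeline; the hypothetical `itemString` stands in for `(string)item.Text` from the Excel range.

```csharp
using System;
using System.Linq;

class SplitDistinct
{
    static void Main()
    {
        // Hypothetical cell text with a leading-space duplicate.
        string itemString = "the_red_bucket_01, the_blue_duck_01, the_red_bucket_01";

        // Split on commas, trim the leading spaces, then drop duplicates.
        var unique = itemString
            .Split(',')
            .Select(x => x.Trim())
            .Distinct()
            .ToList();

        foreach (var combo in unique)
            Console.WriteLine(combo); // the_red_bucket_01, the_blue_duck_01
    }
}
```

Without the `Trim`, `" the_red_bucket_01"` and `"the_red_bucket_01"` would count as distinct values.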
Where you need to keep keys/values, it's better to use the Dictionary type. Try changing the List<T> code to Dictionary<TKey, TValue>, i.e.
From:
List<string> newList = new List<string>();
foreach (string s in fn)
    if (!newList.Contains(s))
    {
        newList.Add(s);
    }
to
Dictionary<string, string> newList = new Dictionary<string, string>();
foreach (string s in fn)
    if (!newList.ContainsKey(s))
    {
        newList.Add(s, s);
    }
If you are only concerned about distinct items while reading, just use the Distinct operator, like fn.Distinct().
For processing the whole data, I can suggest two methods:
Read in the whole data then use LINQ's Distinct operator
Or use a Set data structure and store each element in that while reading the excel
I suggest that you take a look at the LINQ documentation if you are processing data. It has really great extensions. For even more methods, you can check out the MoreLINQ package.
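As a sketch of the second option above (a set data structure filled while reading): `HashSet<T>.Add` returns false for values already present, so duplicates are filtered at insertion time rather than in a second pass. The `incoming` array is a hypothetical stand-in for the cell values.

```csharp
using System;
using System.Collections.Generic;

class SetWhileReading
{
    static void Main()
    {
        // Hypothetical stream of items as they would arrive from the sheet.
        string[] incoming = { "a", "b", "a", "c", "b" };

        // Add returns false when the item was already seen, so each
        // distinct value is printed exactly once, in first-seen order.
        var seen = new HashSet<string>();
        foreach (var item in incoming)
            if (seen.Add(item))
                Console.WriteLine(item); // prints a, b, c
    }
}
```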
I think your code would probably work as you expect if you moved newList out of the loop - you create a new variable named newList on every iteration, so it never sees duplicates from earlier iterations.
You can do all of this more concisely with LINQ:
//set up some similar data
string list1 = "a,b,c,d,a,f";
string list2 = "a,b,c,d,a,f";
List<string> lists = new List<string> {list1,list2};
// find unique items
var result = lists.SelectMany(i=>i.Split(',')).Distinct().ToList();
SelectMany() "flattens" the list of lists into a list.
Distinct() removes duplicates.
var uniqueItems = new HashSet<string>();
foreach (Excel.Range cell in xlRng)
{
    var cellText = (string)cell.Text;
    foreach (var item in cellText.Split(',').Select(s => s.Trim()))
    {
        uniqueItems.Add(item);
    }
}
foreach (var item in uniqueItems)
{
    Console.WriteLine(item);
}

C# Join two List<int>, remove duplicates, NO LINQ

This is the idea: I have two List<int> and I want to make a third List<int> that joins the two lists mentioned above, without duplicates. I know how to use .Union, but I want to do this without LINQ. So far I have this:
Console.WriteLine("Enter numbers for first list: ");
List<int> firstList = new List<int> { 20, 40, 10, 10, 30, 80 };
//Console.ReadLine().Split(' ').Select(int.Parse).ToList();
Console.WriteLine("Enter numbers for second list: ");
List<int> secondList = new List<int> { 25, 20, 40, 30, 10 };
//Console.ReadLine().Split(' ').Select(int.Parse).ToList();
List<int> newList = new List<int>();
foreach (var item in firstList)
{
    if (secondList.Contains(item))
    {
        continue;
    }
}
newList.Sort();
newList.ForEach(p => Console.WriteLine(p));
And I am actually stuck... I think I need to iterate over both lists and, where items are equal, add them just once to the new list... but I can't figure out how to do that when the lists have different counts.
Any ideas?
This is presented with a big (and I do mean big) caveat - it's going to be slow. You will get much better performance from using LINQ or a different collection (eg. HashSet). This approach is O(n^2) whereas LINQ etc. is O(n).
Simply loop over the second list adding the value to the first if it's not already in the list.
foreach (var item in secondList)
{
    if (!firstList.Contains(item))
    {
        firstList.Add(item);
    }
}
Given that you want a new list at the end of the process you can just add all the items from the first list to the result before the above code:
foreach (var item in firstList)
{
newList.Add(item);
}
and replace firstList with newList when adding.
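Combining those two steps, here is a complete, self-contained sketch without LINQ. Note it also deduplicates the first list itself (the sample firstList contains 10 twice), which the two-step description above would otherwise carry through.

```csharp
using System;
using System.Collections.Generic;

class UnionNoLinq
{
    static void Main()
    {
        var firstList = new List<int> { 20, 40, 10, 10, 30, 80 };
        var secondList = new List<int> { 25, 20, 40, 30, 10 };

        // Copy the first list, skipping values already taken.
        var newList = new List<int>();
        foreach (var item in firstList)
            if (!newList.Contains(item))
                newList.Add(item);

        // Then append anything from the second list not seen yet.
        foreach (var item in secondList)
            if (!newList.Contains(item))
                newList.Add(item);

        newList.Sort();
        newList.ForEach(p => Console.WriteLine(p)); // 10, 20, 25, 30, 40, 80
    }
}
```

As the caveat above says, Contains makes this O(n^2); it is fine for small inputs.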
You could take advantage of different types of collections to do the following:
var set = new HashSet<int>(firstList);
set.UnionWith(secondList);
var newList = new List<int>(set);
Something like this?
newList.AddRange(firstList);
newList.AddRange(secondList);
newList = newList.Distinct().ToList();
newList.Sort();

Accumulate values in chart series - WPF DevExpress

I am creating several line series for a chart control in DevExpress at run-time. The series must be created at run-time since the number of series can vary from the data query I do. Here is how I create the series:
foreach (var item in lstSPCPrintID)
{
    string seriesName = Convert.ToString(item);
    LineSeries2D series = new LineSeries2D();
    dxcSPCDiagram.Series.Add(series);
    series.DisplayName = seriesName;
    var meas = from x in lstSPCChart
               where x.intSPCPrintID == item
               select new { x.intSPCMeas };
    foreach (var item2 in meas)
    {
        series.Points.Add(new SeriesPoint(item2.intSPCMeas));
    }
}
This happens inside a backgroundworker completed event and all the data needed is in the appropriate lists. In the test instance I am running, 6 series are created.
Each series consists of some test measurements that I need in the x-axis. These measurements can be the same value (and are the same value in a lot of cases). What I want then is for the y-axis to contain the count of how many times a measurement is for example -21. This will in the end create a curve.
Right now I create a series point for each measurement, but I do not know how to handle the ArgumentDataMember/ValueDataMember in this specific scenario. Is there a way for the chart to automatically do the counting or do I need to do it manually? Can anyone help me back on track?
I ended up doing a distinct count of the measurements before adding the series points.
foreach (var item in lstSPCPrintID)
{
    string seriesName = String.Format("Position: {0}", Convert.ToString(item));
    LineStackedSeries2D series = new LineStackedSeries2D();
    series.ArgumentScaleType = ScaleType.Numerical;
    series.DisplayName = seriesName;
    series.SeriesAnimation = new Line2DUnwindAnimation();
    var meas = from x in lstSPCChart
               where x.intSPCPrintID == item
               select new { x.dblSPCMeas };
    var measDistinctCount = meas.GroupBy(x => x.dblSPCMeas)
        .Select(group => new { Meas = group.Key, Count = group.Count() })
        .OrderBy(y => y.Meas);
    foreach (var item2 in measDistinctCount)
    {
        series.Points.Add(new SeriesPoint(item2.Meas, item2.Count));
    }
    dxcSPCDiagram.Series.Add(series);
    series.Animate();
}

less expensive way to find duplicate rows in a datatable?

I want to find all rows in a DataTable where each of a group of columns is a duplicate. My current idea is to get a list of indexes of all rows that appear more than once as follows:
public List<int> findDuplicates_New()
{
    string[] duplicateCheckFields = { "Name", "City" };
    List<int> duplicates = new List<int>();
    List<string> rowStrs = new List<string>();
    string rowStr;
    //convert each datarow to a delimited string and add it to list rowStrs
    foreach (DataRow dr in submissionsList.Rows)
    {
        rowStr = string.Empty;
        foreach (DataColumn dc in submissionsList.Columns)
        {
            //only use the duplicateCheckFields in the string
            if (duplicateCheckFields.Contains(dc.ColumnName))
            {
                rowStr += dr[dc].ToString() + "|";
            }
        }
        rowStrs.Add(rowStr);
    }
    //count how many of each row string are in the list
    //add the string's index (which will match the row's index)
    //to the duplicates list if more than 1
    for (int c = 0; c < rowStrs.Count; c++)
    {
        if (rowStrs.Count(str => str == rowStrs[c]) > 1)
        {
            duplicates.Add(c);
        }
    }
    return duplicates;
}
However, this isn't very efficient: it's O(n^2) to go through the list of strings and get the count of each string. I looked at this solution but couldn't figure out how to use it with more than 1 field. I'm looking for a less expensive way to handle this problem.
Try this:
How can I check for an exact match in a table where each row has 70+ columns?
The essence is to make a set where you store hashes for rows and only do comparisons between rows with colliding hashes, complexity will be O(n)
...
If you have a large number of rows and storing the hashes themselves is an issue (an unlikely case, but still...) you can use a Bloom filter. The core idea of a Bloom filter is to calculate several different hashes of each row and use them as an address in a bitmap. As you're scanning through the rows you can double-check the rows that already have all the bits in the bitmap previously set.
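A sketch of the single-pass idea using LINQ's GroupBy over a composite key built from the check fields: grouping is one hash-based pass, so the quadratic recount goes away. The tiny in-memory table here is hypothetical sample data.

```csharp
using System;
using System.Data;
using System.Linq;

class DuplicateRows
{
    static void Main()
    {
        // Hypothetical table with the two key columns.
        var t = new DataTable();
        t.Columns.Add("Name");
        t.Columns.Add("City");
        t.Rows.Add("Ann", "Oslo");
        t.Rows.Add("Bob", "Rome");
        t.Rows.Add("Ann", "Oslo");

        string[] duplicateCheckFields = { "Name", "City" };

        // Pair each row index with a "Name|City" composite key, group by
        // the key in one hash pass, and keep indexes of repeated groups.
        var duplicates = t.Rows.Cast<DataRow>()
            .Select((row, index) => new
            {
                index,
                key = string.Join("|", duplicateCheckFields.Select(f => row[f]))
            })
            .GroupBy(x => x.key)
            .Where(g => g.Count() > 1)
            .SelectMany(g => g.Select(x => x.index))
            .ToList();

        Console.WriteLine(string.Join(",", duplicates)); // 0,2
    }
}
```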
