Selenium C#: How to Find Element Again

First time here. I'd like some help with a Selenium test that's had me stuck for several hours.
I stored an IList<IWebElement> from a page, composed of <a> elements. I was able to click the first element of that IList in a foreach, get what I needed from the new page, and go back to the list page using driver.Navigate().Back().
However, I then can't click the second element of the list.
Is there a way to find that second element, third element, fourth element, and so on?
static void Main(string[] args)
{
    IWebDriver driver = new FirefoxDriver();
    string url = "http://DummyPageOne.Com";
    driver.Navigate().GoToUrl(url);
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));
    PageOne page = new PageOne(driver);
    page.StoreItems();
    foreach (IWebElement item in page.Items)
    {
        item.Click();
        ItemDetails details = new ItemDetails(driver);
        details.SaveImage(@"D:\Images\");
        driver.Navigate().Back();
    }
}

public class PageOne
{
    private readonly IWebDriver driver;

    public PageOne(IWebDriver driver)
    {
        this.driver = driver;
        PageFactory.InitElements(driver, this);
    }

    public IList<IWebElement> Items;

    public void StoreItems()
    {
        string locator = "DummyLocator";
        Items = driver.FindElements(By.XPath(locator));
    }
}

public class ItemDetails
{
    public ItemDetails(IWebDriver driver)
    {
        PageFactory.InitElements(driver, this);
    }

    public void SaveImage(string path)
    {
    }
}

Based on what I understood from your question, here is a way to achieve it. I checked the following C# code on Chrome and it works as expected. I hope this helps.
[Test]
public void ClickAllLinks()
{
    // Navigate to URL
    driver.Navigate().GoToUrl(@"https://www.google.co.in/#q=Selenium");
    // Store links in an IList
    IList<IWebElement> resultLinks = driver.FindElements(By.CssSelector("div#ires>ol>div:not(:first-child)>div>div>h3>a"));
    // Print count
    //Console.WriteLine(resultLinks.Count);
    for (int i = 0; i < resultLinks.Count; i++)
    {
        int j = i + 1;
        // Click on link
        driver.FindElement(By.CssSelector("div#ires>ol>div:not(:first-child)>div:nth-child(" + j + ")>div>h3>a")).Click();
        // Print element locator
        //Console.WriteLine("div#ires>ol>div:not(:first-child)>div:nth-child(" + j + ")>div>h3>a");
        Thread.Sleep(2000); // Static wait is not recommended; add a conditional wait
        // Get what you need from the new page
        // Navigate back to the parent page
        driver.Navigate().Back();
        Thread.Sleep(2000); // Static wait is not recommended; add a conditional wait
    }
}
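As the comments above say, the static waits should really be conditional waits. A minimal sketch with WebDriverWait, assuming the Selenium support package is referenced and reusing the same CSS locator:
// Wait up to 10 seconds for the result links to be present again
// after navigating back, instead of sleeping a fixed 2 seconds
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
wait.Until(d => d.FindElements(
    By.CssSelector("div#ires>ol>div:not(:first-child)>div>div>h3>a")).Count > 0);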
If this is not what you want, please let me know.

The simple way is to re-initialize that list at the end of each loop iteration.
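A minimal sketch of that approach, reusing the question's placeholder XPath "DummyLocator":
IList<IWebElement> items = driver.FindElements(By.XPath("DummyLocator"));
for (int i = 0; i < items.Count; i++)
{
    items[i].Click();            // open the detail page
    // ... grab what you need from the detail page ...
    driver.Navigate().Back();    // back to the list page

    // Re-initialize the list so the next index points at a fresh element
    items = driver.FindElements(By.XPath("DummyLocator"));
}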

OK, the thing to remember about .NET and Selenium is that the C# variables are references to elements that exist in the browser's current DOM.
When the browser window navigates away, the references held by those variables become invalid. You must 're-find' them.
I would be very surprised if you could click even the first item in the list otherwise.
The only way to click these objects is to reacquire each one after the page has reloaded.
E.g.: Driver.FindElement(By.XPath("//div[@class='somelocator']"));
If you can, I would recommend constructing a collection of 'By' locators,
then looping through them, passing each one to the Find method, as sketched below.
That way, on each iteration of the loop you acquire the object fresh.
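A minimal sketch of that idea, with hypothetical XPath locators standing in for the question's real ones:
var locators = new List<By>
{
    By.XPath("//div[@class='somelocator']/a[1]"),
    By.XPath("//div[@class='somelocator']/a[2]"),
};

foreach (By locator in locators)
{
    // Re-acquire the element fresh on every iteration, after each reload
    IWebElement link = driver.FindElement(locator);
    link.Click();
    // ... read what you need from the detail page ...
    driver.Navigate().Back();
}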

Related

How to right click on Selenium with all UI grid rows selected?

I need to keep both UI Grid rows selected in order for a customised right-click menu to show up. If only one row is selected, a different context menu is displayed, so I need to keep both rows selected.
Any ideas how to do this?
The code below is what I have tried. The trouble is that, even though both rows are selected, the right click only happens on one row, effectively de-selecting the other.
See the code below.
class Program
{
    static void Main(string[] args)
    {
        IWebElement tableElement;
        string _address = "https://datatables.net/examples/api/select_row.html";
        IWebDriver _driver = new ChromeDriver();
        _driver.Navigate().GoToUrl(_address);
        tableElement = _driver.FindElement(By.Id("example"));
        Actions actions = new Actions(_driver);
        var noRows = _driver.FindElements(By.XPath("//table[@id='example']/tbody/tr"));
        for (int i = 0; i < 2; i++)
        {
            noRows[0].Click();
            actions.KeyDown(Keys.Control).Click(noRows[1]).KeyUp(Keys.Control).Perform();
            actions.ContextClick(noRows[1]).Perform();
        }
    }
}
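One untested idea: chain the selection and the right-click into a single Actions sequence, keeping Ctrl held until after the context-click, so the right-click does not clear the selection:
Actions actions = new Actions(_driver);
actions.Click(noRows[0])
       .KeyDown(Keys.Control)
       .Click(noRows[1])
       .ContextClick(noRows[1])
       .KeyUp(Keys.Control)
       .Perform();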

Locating an element within an object design with JavaScript

Below is the code I was using to at least select an element from the object. The attached image shows what the object looks like on the web page. There is no unique name for me to use to select the object. With this code, I was able to see the seat count, but applying the click action on any of the seats did not work. Any pointers in the right direction would be appreciated.
public static void SeatSelection()
{
    IList<IWebElement> seats = Driver.FindElements(By.TagName("circle"));
    foreach (IWebElement seat in seats)
    {
        if (seat.Displayed && seat.Enabled)
        {
            seat.Click();
        }
        else
        {
            Console.WriteLine("Tickets Sold Out");
        }
    }
}
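One untested idea: <circle> is an SVG element, and SVG elements sometimes ignore the plain Click(). Two common workarounds, where seat is the loop variable from the code above, are an Actions-based click and dispatching the click from JavaScript:
// Option 1: move to the seat and click through the Actions API
new Actions(Driver).MoveToElement(seat).Click().Perform();

// Option 2: dispatch a click event from JavaScript
((IJavaScriptExecutor)Driver).ExecuteScript(
    "arguments[0].dispatchEvent(new MouseEvent('click', {bubbles: true}));", seat);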

How to break out of a loop when a page is reloading

I have the code below, which goes through all of the links in a panel and, once it finds the relevant link, clicks it. The problem is that when I select the link, the page reloads before it shows a new page. So I thought of adding a break to exit the loop, because if I don't, I get an 'element is not attached to the document' error. I believe this error occurs because the loop keeps iterating over the links while the page is reloading.
However, after adding the break and running the test, nothing happens: it doesn't select a link. How can I break out of the loop after clicking the link?
I don't need to return to the loop after the page has reloaded. I simply want to break out of it, because the loop has done its job of finding the link and clicking it.
public void SelectHomepageSearchPanelLink(string linkText)
{
    var searchPanelLinks = _driver.FindElements(HomepageResponsiveElements.HomepageSearchPanelLinks);
    foreach (var searchPanelLink in searchPanelLinks)
    {
        if (searchPanelLink.Text == linkText)
        {
            searchPanelLink.Click();
            break;
        }
        else
        {
            throw new Exception($"{linkText} link not found by the responsive homepage search panel");
        }
    }
}
Try this (note that the throw has to happen after the loop; otherwise the first non-matching link raises the exception):
public void SelectHomepageSearchPanelLink(string linkText)
{
    var searchPanelLinks = _driver.FindElements(HomepageResponsiveElements.HomepageSearchPanelLinks);
    IWebElement linkToClick = null;
    foreach (var searchPanelLink in searchPanelLinks)
    {
        if (searchPanelLink.Text == linkText)
        {
            linkToClick = searchPanelLink;
            break;
        }
    }
    if (linkToClick == null)
    {
        throw new Exception($"{linkText} link not found by the responsive homepage search panel");
    }
    linkToClick.Click();
}
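A shorter equivalent, assuming using System.Linq is available; FirstOrDefault returns null when no link text matches:
public void SelectHomepageSearchPanelLink(string linkText)
{
    var linkToClick = _driver
        .FindElements(HomepageResponsiveElements.HomepageSearchPanelLinks)
        .FirstOrDefault(l => l.Text == linkText);

    if (linkToClick == null)
    {
        throw new Exception($"{linkText} link not found by the responsive homepage search panel");
    }
    linkToClick.Click();
}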

How to scroll to element with Selenium WebDriver

How do I get Selenium WebDriver to scroll to a particular element to bring it onto the screen? I have tried a lot of different options but have had no luck.
Does this not work in the C# bindings?
I can make it jump to a particular location, e.g.
((IJavaScriptExecutor)Driver).ExecuteScript("window.scrollTo(0, document.body.scrollHeight - 150)");
But I want to be able to send it to different elements without giving the exact location each time.
public IWebElement Example { get { return Driver.FindElement(By.Id("123456")); } }
Ex 1)
((IJavaScriptExecutor)Driver).ExecuteScript("arguments[0].scrollIntoView(true);", Example);
Ex 2)
((IJavaScriptExecutor)Driver).ExecuteScript("window.scrollBy(" + Example.Location.X + ", " + (Example.Location.Y - 100) + ")");
When I watch it, it does not jump down the page to the element, and the exception is consistent with the element being off screen.
I added a bool ex = Example.Exists(); after it and checked the results:
It does exist (it's true).
It's not displayed (it's still offscreen, since the view has not moved to the element).
It's not selected.
Someone reported success with By.ClassName.
Does anyone know if there is a problem with doing this By.Id in the C# bindings?
It's a somewhat older question, but I believe there is a better solution than those suggested above.
Here is the original answer: https://stackoverflow.com/a/26461431/1221512
You should use the Actions class to perform scrolling to the element.
var element = driver.FindElement(By.Id("element-id"));
Actions actions = new Actions(driver);
actions.MoveToElement(element);
actions.Perform();
This works for me in Chrome, IE8 & IE11:
public void ScrollTo(int xPosition = 0, int yPosition = 0)
{
    var js = String.Format("window.scrollTo({0}, {1})", xPosition, yPosition);
    JavaScriptExecutor.ExecuteScript(js);
}

public IWebElement ScrollToView(By selector)
{
    var element = WebDriver.FindElement(selector);
    ScrollToView(element);
    return element;
}

public void ScrollToView(IWebElement element)
{
    if (element.Location.Y > 200)
    {
        ScrollTo(0, element.Location.Y - 100); // Make sure the element is in view but below the top navigation pane
    }
}
This works for me:
var elem = driver.FindElement(By.ClassName("something"));
driver.ExecuteScript("arguments[0].scrollIntoView(true);", elem);
This works for me in C# automation:
public Page ScrollUp()
{
    IWebElement s = driver.FindElement(By.Id("your_locator"));
    IJavaScriptExecutor je = (IJavaScriptExecutor)driver;
    je.ExecuteScript("arguments[0].scrollIntoView(false);", s);
    return this;
}
I created an extension method for IWebDriver:
public static IWebElement GetElementAndScrollTo(this IWebDriver driver, By by)
{
    var js = (IJavaScriptExecutor)driver;
    try
    {
        var element = driver.FindElement(by);
        if (element.Location.Y > 200)
        {
            js.ExecuteScript($"window.scrollTo(0, {element.Location.Y - 200})");
        }
        return element;
    }
    catch (Exception)
    {
        return null;
    }
}
For scrolling down inside the page, here is a small piece of code and the solution.
My scenario: the Accept and Don't Accept buttons were not enabled until I scrolled down the page. There were 15 terms-and-conditions paragraphs, and I needed to reach the 15th, so I inspected the web page and took the Id of the last paragraph. Clicking it makes Selenium scroll it into view:
driver.FindElement(By.Id("para15")).Click();
<div id="para15">One way Non-Disclosure Agreement</div>
I had somewhat the same problem. I was working on a web page and needed to click a button on a child window which, by default, was located below the screen.
This is the code I used, and it worked.
I simulated a mouse drag-and-drop and moved the window 250 points upwards so that the button was on the screen.
Actions action = new Actions(driver);
action.DragAndDropToOffset(driver.FindElement(By.XPath("put an element path which **is on the screen now**, such as a label")), 0, -250);
action.Build().Perform();
If the reason for the wait is that the page takes a long time to load, then add the wait. That's all.
ChromeOptions options = new ChromeOptions();
var driver = new ChromeDriver(options);
driver.Navigate().GoToUrl("https://www.w3schools.com/");
Thread.Sleep(5000);
driver.ExecuteScript("scroll(0,400)");
HtmlDocument countriesDocument = new HtmlDocument(); // HtmlAgilityPack
countriesDocument.LoadHtml(driver.PageSource);
Here is a solution for scrolling within a specific element, like a scrollable table. (Self, X, Y, and direction come from the surrounding class and are not shown here.)
// Driver is the Selenium IWebDriver
IJavaScriptExecutor exec = (IJavaScriptExecutor)Driver;
int horizontalScroll = direction == Direction.Right ? X : 0;
int verticalScroll = direction == Direction.Down ? Y : 0;
exec.ExecuteScript(
    "arguments[0].scrollBy(arguments[1], arguments[2])",
    Self,            // the scrollable element
    horizontalScroll,
    verticalScroll);
Actions actions = new Actions(driver);
actions.SendKeys(Keys.PageDown).Build().Perform();
You can run this in a for loop; it works like clockwork. Simple, but not always convenient.
var js = (IJavaScriptExecutor)driver;
js.ExecuteScript("arguments[0].scrollIntoView({behavior: 'smooth', block: 'center'})", PutYourElementIDHere);
var e = driver.FindElement(By.XPath("//*[text()='Timesheet']"));
// JavaScript Executor to scroll to element
((IJavaScriptExecutor)driver).ExecuteScript("arguments[0].scrollIntoView(true);", e);

Read Source of Page that completes itself after reaching end of page

I wanted to write a page parser for VK.com. My problem is that the page source contains only 50 results; the others are loaded only after reaching the end of the page.
My code so far:
private void syncToolStripMenuItem_Click(object sender, EventArgs e)
{
    string[] information, title, artist;
    int i = 0;
    List<string> joint = new List<string>();
    information = info_basic(webBrowser1.DocumentText);
    title = info_title(information);
    artist = info_artist(information);
    foreach (string str in title)
    {
        joint.Add(artist[i] + " - " + title[i]);
        i++;
    }
    listBox1.Items.Clear();
    listBox1.Items.AddRange(joint.ToArray());
}

private string[] info_basic(string source)
{
    string[] temps;
    List<string> sub = new List<string>();
    temps = Regex.Split(source, "<div class=\"play_btn fl_l\">");
    foreach (string str in temps)
    {
        sub.Add(str);
    }
    sub.RemoveRange(0, 1);
    return sub.ToArray();
}
The important code of the page:
http://csharp.bplaced.net/files/vk%20source.txt
I recommend monitoring the traffic from the page to vk.com when you scroll to the bottom
(for example, using the Fiddler HTTP proxy) and finding out how the page is loaded dynamically.
Most probably this is done through asynchronous AJAX calls from JavaScript.
Then simulate the same behavior in code to load the entire page; the HttpWebRequest class would work best for this task.
But since you are using the WebBrowser control, which probably does all the work of loading the content, you can instead try to programmatically scroll the web browser control's view so that the page's own JS fires and loads content; stop when you reach the bottom, and then parse the entire loaded page.
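A minimal sketch of that scroll-until-stable idea with the WinForms WebBrowser control; the 1500 ms delay and the height-based stop condition are assumptions to tune against the real page:
private void LoadFullPage()
{
    int lastHeight = 0;
    while (true)
    {
        var body = webBrowser1.Document?.Body;
        if (body == null) break;

        // Scroll to the bottom so the page's own JS loads the next batch
        webBrowser1.Document.Window.ScrollTo(0, body.ScrollRectangle.Height);

        // Keep pumping messages for ~1.5 s so the browser can run its AJAX
        var sw = System.Diagnostics.Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < 1500)
        {
            Application.DoEvents();
        }

        // Stop once the document height no longer grows
        int newHeight = body.ScrollRectangle.Height;
        if (newHeight == lastHeight) break;
        lastHeight = newHeight;
    }
    // webBrowser1.DocumentText now contains the fully loaded page
}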
