C# Selenium form submission

I'm having issues with an "if the submit form is available, then submit data and check for the response" flow. The code seems to check for the submit form and submit the data, but then it doesn't process the response. Example of the code:
if (driver.FindElements(By.Name("search")).Count > 0 && driver.FindElement(By.Name("search")).Displayed)
{
driver.FindElement(By.Name("search")).SendKeys(query + Keys.Enter);
if (driver.FindElements(By.XPath("//*[#id='not found']/h2")).Count > 0 && driver.FindElement(By.XPath("//*[#id='not found']/h2")).Displayed)
{
Console.WriteLine("search not found");
driver.Manage().Cookies.DeleteAllCookies();
driver.Navigate().GoToUrl("https://example.com");
}
}
What this should do is:
if
driver.FindElement(By.Name("search"))
exists, then
driver.FindElement(By.Name("search")).SendKeys(query)
and then check the response provided and handle it using the commands within the if statement.

I would rewrite this a little to make it more readable and to avoid hitting the page so many times. Every time you call driver.FindElement(), Selenium queries the page again. Query it once, do all your analysis using that first result, and then proceed.
IReadOnlyCollection<IWebElement> search = GetVisibleElements(By.Name("search"));
if (search.Any())
{
    search.ElementAt(0).SendKeys(query + Keys.Enter);
    if (GetVisibleElements(By.XPath("//*[@id='not found']/h2")).Any())
    {
        // search not found
        Console.WriteLine("search not found");
        Driver.Manage().Cookies.DeleteAllCookies();
        Driver.Navigate().GoToUrl("https://example.com");
    }
    else
    {
        // search found
        // do stuff here
    }
}
Since you are checking more than once whether an element exists and is visible, I would wrap that code in a function to make it more reusable and your code easier to read.
public IReadOnlyCollection<IWebElement> GetVisibleElements(By locator)
{
    return Driver.FindElements(locator).Where(e => e.Displayed).ToList();
}
This function locates the elements matching the locator provided, filters them down to only those that are displayed, and then returns the list. You can then check whether there are any elements in the returned list in your script.
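If the "not found" message is rendered only after the search round-trips to the server, checking for it immediately after SendKeys may be too early. A variant along these lines might help; this is only a sketch, assuming the locator from the question, an arbitrary timeout, and WebDriverWait from OpenQA.Selenium.Support.UI:
public bool SearchNotFound(IWebDriver driver, TimeSpan timeout)
{
    // Poll the page until a visible "not found" element appears, or the timeout elapses.
    var wait = new WebDriverWait(driver, timeout);
    try
    {
        return wait.Until(d => d.FindElements(By.XPath("//*[@id='not found']/h2")).Any(e => e.Displayed));
    }
    catch (WebDriverTimeoutException)
    {
        // No visible "not found" element within the timeout: treat the search as successful.
        return false;
    }
}
Calling SearchNotFound(driver, TimeSpan.FromSeconds(5)) right after submitting the query would then decide which branch to take.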

Related

Element not interactable with C# app that uses Chrome WebDriver

PREFACE: After a lengthy Stack Overflow search I found two suggested solutions to solve the "element not interactable" problem I am having when I try to interact with the target node element. Neither of them worked, as described below.
I have a C# app that uses the OpenQA.Selenium package to remote control a YouTube web page. I am trying to click on a button on the page that opens a dialog box, but when I do I get the notorious "element not interactable" message. I found the following two suggestions on Stack Overflow:
Actions actions = new Actions(chromeDriver);
actions.MoveToElement(webElem);
actions.Perform();
And this suggestion that one commenter said is ill-advised because it can click on elements that are not visible or are below modal objects:
IJavaScriptExecutor executor = (IJavaScriptExecutor)chromeDriver;
executor.ExecuteScript("arguments[0].click();", webElem);
I tried the second one anyway to see if it worked. Unfortunately, with the first suggestion that uses the Actions interface, I still got the "element not interactable" message, but this time on the Perform() statement. The third attempt (script execution) did not produce the error message, but it failed to click the button. I know this because clicking the button opens a dialog window when it works, and no dialog window appeared when I tried the third solution.
Below is the code I am using to try and click on the element. The collection it iterates over contains the elements selected by an XPath statement that finds the button I want to click. It tries every button that matches the XPath statement and skips those that fail to work. Unfortunately, none of the 3 buttons found by the XPath statement works.
What is strange is that if I take the exact same XPath statement I am using in my C# app and plug it into the Chrome DevTools debugger, referencing the first element in the array of found elements, it works:
$x(strXPath)[0].click()
But so far nothing I have tried from the C# app works. Does anyone have an idea of why I am having this problem?
public IWebElement ClickFirstInteractable(ChromeDriver chromeDriver)
{
    string errPrefix = "(ClickFirstInteractable) ";
    if (this.DOM_WebElemensFound == null || this.DOM_WebElemensFound.Count() < 1)
        throw new NullReferenceException(errPrefix + "The DOM_WebElementsFound collection is empty.");
    IWebElement webElemClicked = null;
    foreach (IWebElement webElem in this.DOM_WebElemensFound)
    {
        // Try and "click" it.
        try
        {
            // First make sure the element is visible, or we will get
            // the "element not interactable" error.
            /* FIRST ATTEMPT, didn't work.
            webElem.scrollIntoView(true);
            webElem.Click(); // <<<<<----- Error occurs here
            */
            /* SECOND ATTEMPT using Actions, didn't work,
             * and I got the error message when the Perform() statement executes.
            Actions actions = new Actions(chromeDriver);
            actions.MoveToElement(webElem);
            actions.Perform(); // <<<<<----- Error occurs here
            */
            /* THIRD ATTEMPT using script execution, didn't work.
             * I did not get the error message, but the button did not get clicked.
             */
            IJavaScriptExecutor executor = (IJavaScriptExecutor)chromeDriver;
            executor.ExecuteScript("arguments[0].scrollIntoView();", webElem);
            executor.ExecuteScript("arguments[0].click();", webElem);
            // Click operation accepted. Stop iteration.
            webElemClicked = webElem;
            break;
        }
        catch (ElementNotInteractableException exc)
        {
            // Swallow this exception and go on to the next element found by the XPath expression.
            System.Console.WriteLine(exc.Message);
        }
    }
    return webElemClicked;
}
I tried to reproduce your scenario by clicking on a "hidden" button, waiting for the modal to appear, then acting on that modal, etc.
I hope it helps you!
const string Target = #"https://www.youtube.com/";
using var driver = new ChromeDriver();
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(20))
{
PollingInterval = TimeSpan.FromMilliseconds(250),
};
driver.Navigate().GoToUrl(Target);
// i don't consent cookies to
// save time, so just do it
// here manually and then press enter to console
Console.ReadLine();
var menuLocator = By.XPath("//a[#id = 'video-title-link'][1]" +
"/ancestor::div[#id = 'meta']" +
"/following-sibling::div[#id = 'menu']" +
"//button[#class = 'style-scope yt-icon-button']");
var menu = wait.Until(d => d.FindElement(menuLocator));
var actions = new Actions(driver);
actions.MoveToElement(menu).Click().Perform();
var shareLocator = By.XPath("//div[#id = 'contentWrapper']//*[normalize-space(text()) = 'Share']");
var share = wait.Until(d => d.FindElement(shareLocator));
actions.MoveToElement(share).Click().Perform();
var copyLinkLocator = By.XPath("//button[#aria-label = 'Copy']");
var copyLink = wait.Until(d => d.FindElement(copyLinkLocator));
actions.MoveToElement(copyLink).Click().Perform();

Scraping html list data from a dynamic server

Hello guys!
Sorry for the dumb question, this is my last resort. I swear I tried countless other Stack Overflow questions, different frameworks, etc., but those didn't seem to help.
I have the following problem:
A website displays a list of data (there is a TON of div, li, span etc. tags in front of it; it's a big HTML document).
I'm writing a tool that fetches data from a specific list inside a ton of other div tags, downloads it and outputs an Excel file.
The website I'm trying to access is dynamic. So you open the website, it loads a little bit, and then the list appears (probably some JS and stuff).
When I try to download the website via a WebRequest in C#, the HTML I get is almost empty, with a ton of white space, lots of non-HTML stuff and some garbage data as well.
Now: I'm pretty used to C#, HtmlAgilityPack and countless other libraries, just not so much to web-related stuff. I tried CefSharp, Chromium etc., but unfortunately I couldn't get them to work properly.
I want to have HTML in my program to work with that looks exactly like the HTML you see when you open the dev console in Chrome while visiting the website mentioned above.
The HTML parser works flawlessly there.
This is how I imagine the code could look, simplified.
Extreme C# pseudocode:
WebBrowserEngine web = new WebBrowserEngine()
web.LoadURLuntilFinished(url); // with all the JS executed and stuff
String html = web.getHTML();
web.close();
My goal would be that the string html in the pseudocode looks exactly like the one in the Chrome dev tab.
Maybe there is a solution posted somewhere else, but I swear I couldn't find it; I've been looking for days.
Any help is greatly appreciated.
@SpencerBench is spot on in saying
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically.
To answer the question for your specific use case, we need to understand the behaviour of the page you want to scrape data from, or as I asked in the comments, how do you know the page is "finished"?
However, it's possible to give a fairly generic answer to the question which should act as a starting point for you.
This answer uses Selenium, a package which is commonly used for automating testing of web UIs, but as they say on their home page, that's not the only thing it can be used for.
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated as well.
The web site I'm scraping
So first we need a web site. I've created one using ASP.NET Core MVC with .NET Core 3.1, although the web site's technology stack isn't important; it's the behaviour of the page you want to scrape that matters. This site has 2 pages, unimaginatively called Page1 and Page2.
Page controllers
There's nothing special in these controllers:
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page1Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page1");
        }
    }
}

namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page2Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page2");
        }
    }
}
API controller
There's also an API controller (i.e. it returns data rather than a view) which the views can call asynchronously to get some data to display. This one just creates an array of the requested number of random strings.
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;
    using System;
    using System.Collections.Generic;
    using System.Text;

    [Route("api/[controller]")]
    [ApiController]
    public class DataController : ControllerBase
    {
        [HttpGet("Create")]
        public IActionResult Create(int numberOfElements)
        {
            var response = new List<string>();
            for (var i = 0; i < numberOfElements; i++)
            {
                response.Add(RandomString(10));
            }
            return Ok(response);
        }

        private string RandomString(int length)
        {
            var sb = new StringBuilder();
            var random = new Random();
            for (var i = 0; i < length; i++)
            {
                var characterCode = random.Next(65, 90); // A-Z
                sb.Append((char)characterCode);
            }
            return sb.ToString();
        }
    }
}
Views
Page1's view looks like this:
@{
    ViewData["Title"] = "Page 1";
}

<div class="text-center">
    <div id="list" />
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        $(document).ready(function () {
            $('#list').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=20000',
                datatype: 'json',
                success: function (data) {
                    $('#loading').remove();
                    var insert = ''
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    insert = '<ul id="results">' + insert + '</ul>';
                    $('#list').html(insert);
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        });
    </script>
</div>
So when the page first loads, it just contains an empty div called list; however, the page loading triggers the function passed to jQuery's $(document).ready function, which makes an asynchronous call to the API controller, requesting an array of 20,000 elements. While the call is in progress, "Loading..." is displayed on the screen, and when the call returns, this is replaced by an unordered list containing the received data. This is written in a way intended to be friendly to developers of automated UI tests, or of screen scrapers, because we can tell whether all the data has loaded by testing whether or not the page contains an element with the ID results.
Page2's view looks like this:
@{
    ViewData["Title"] = "Page 2";
}

<div class="text-center">
    <div id="list">
        <ul id="results" />
    </div>
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        var requestCount = 0;
        var maxRequests = 20;
        $(document).ready(function () {
            getData();
        });
        function getDataIfAtBottomOfPage() {
            console.log("scroll - " + requestCount + " requests");
            if (requestCount < maxRequests) {
                console.log("scrollTop " + document.documentElement.scrollTop + " scrollHeight " + document.documentElement.scrollHeight);
                if (document.documentElement.scrollTop > (document.documentElement.scrollHeight - window.innerHeight - 100)) {
                    getData();
                }
            }
        }
        function getData() {
            window.onscroll = undefined;
            requestCount++;
            $('results2').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=50',
                datatype: 'json',
                success: function (data) {
                    var insert = ''
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    $('#loading').remove();
                    $('#results').append(insert);
                    if (requestCount < maxRequests) {
                        window.setTimeout(function () { window.onscroll = getDataIfAtBottomOfPage }, 1000);
                    } else {
                        $('#results').append('<li>That\'s all folks');
                    }
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        }
    </script>
</div>
This gives a nicer user experience because it requests data from the API controller in multiple smaller chunks, so the first chunk of data appears fairly quickly, and once the user has scrolled down to somewhere near the bottom of the page, the next chunk of data is requested, until 20 chunks have been requested and displayed, at which point the text "That's all folks" is added to the end of the unordered list. However this is more difficult to interact with programmatically because you need to scroll the page down to make the new data appear.
(Yes, this implementation is a bit buggy - if the user gets to the bottom of the page too quickly then requesting the next chunk of data doesn't happen until they scroll up a bit. But the question isn't about how to implement this behaviour in a web page, but about how to scrape the displayed data, so please forgive my bugs.)
The scraper
I've implemented the scraper as a xUnit unit test project, just because I'm not doing anything with the data I've scraped from the web site other than Asserting that it is of the correct length, and therefore proving that I haven't prematurely assumed that the web page I'm scraping from is "finished". You can put most of this code (other than the Asserts) into any type of project.
Having created your scraper project, you need to add the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver NuGet packages.
Page Object Model
I'm using the Page Object Model pattern to provide a layer of abstraction between functional interaction with the page and the implementation detail of how to code that interaction. Each of the pages in the web site has a corresponding page model class for interacting with that page.
First, a base class with some code which is common to more than one page model class.
namespace StackOverflow68925623Scraper
{
    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public class PageModel
    {
        protected PageModel(IWebDriver driver)
        {
            this.Driver = driver;
        }

        protected IWebDriver Driver { get; }

        public void ScrollToTop()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, 0)");
        }

        public void ScrollToBottom()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, document.body.scrollHeight)");
        }

        protected IWebElement GetById(string id)
        {
            try
            {
                return this.Driver.FindElement(By.Id(id));
            }
            catch (NoSuchElementException)
            {
                return null;
            }
        }

        protected IWebElement AwaitGetById(string id)
        {
            var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
            return wait.Until(e => e.FindElement(By.Id(id)));
        }
    }
}
This base class gives us 4 convenience methods:
Scroll to the top of the page
Scroll to the bottom of the page
Get the element with the supplied ID, or return null if it doesn't exist
Get the element with the supplied ID, or wait for up to 10 seconds for it to appear if it doesn't exist yet
And each page in the web site has its own model class, derived from that base class.
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page1Model : PageModel
    {
        public Page1Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement AwaitResults => this.AwaitGetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page1");
        }
    }
}

namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page2Model : PageModel
    {
        public Page2Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement Results => this.GetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page2");
        }
    }
}
And the Scraper class:
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium.Chrome;
    using System;
    using System.Threading;
    using Xunit;

    public class Scraper
    {
        [Fact]
        public void TestPage1()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page1Model(driver);
            page.Navigate();
            try
            {
                // Act
                var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);

                // Assert
                Assert.Equal(20000, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }

        [Fact]
        public void TestPage2()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page2Model(driver);
            page.Navigate();
            try
            {
                // Act
                while (!page.Results.Text.Contains("That's all folks"))
                {
                    Thread.Sleep(1000);
                    page.ScrollToBottom();
                    page.ScrollToTop();
                }
                var actualResults = page.Results.Text.Split(Environment.NewLine);

                // Assert - we expect 1001 because of the extra "that's all folks"
                Assert.Equal(1001, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }
    }
}
So, what's happening here?
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
ChromeDriver is in the Selenium.WebDriver.ChromeDriver package and implements the IWebDriver interface from the Selenium.WebDriver package with the code to interact with the Chrome browser. Other packages are available containing implementations for all popular browsers. Instantiating the driver object opens a browser window, and calling its Navigate method directs the browser to the page we want to test/scrape.
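As a side note, swapping in another browser is typically just a matter of referencing its driver package; for example, a minimal sketch assuming the Selenium.WebDriver.GeckoDriver NuGet package and Firefox are installed:
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// FirefoxDriver implements the same IWebDriver interface, so the page model classes above work unchanged.
IWebDriver driver = new FirefoxDriver();
driver.Navigate().GoToUrl("https://localhost:44394/Page1");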
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
Because on Page1, the results element doesn't exist until all the data has been displayed, and no user interaction is required in order for it to be displayed, we use the page model's AwaitResults property to just wait for that element to appear and return it once it has appeared.
AwaitResults returns an IWebElement instance representing the element, which in turn has various methods and properties we can use to interact with the element. In this case we use its Text property, which returns the element's contents as a string, without any markup. Because the data is displayed as an unordered list, each element in the list is delimited by a line break, so we can use String's Split method to convert it to a string array.
Page2 needs a different approach - we can't use the presence of the results element to determine whether the data has all been displayed, because that element is on the page right from the start, instead we need to check for the string "That's all folks" which is written right at the end of the last chunk of data. Also the data isn't loaded all in one go, and we need to keep scrolling down in order to trigger the loading of the next chunk of data.
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
Thread.Sleep(1000);
page.ScrollToBottom();
page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
Because of the bug in the UI that I mentioned earlier, if we get to the bottom of the page too quickly, the fetch of the next chunk of data isn't triggered, and attempting to scroll down when already at the bottom of the page doesn't raise another scroll event. That's why I'm scrolling to the bottom of the page and then back to the top - that way I can guarantee that a scroll event is raised. You never know, the web site you're trying to scrape data from may itself be buggy.
Once the "That's all folks" text has appeared, we can go ahead and get the results element's Text property and convert it to a string array as before.
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
This is the bit that won't be in your code. Because I'm scraping a web site which is under my control, I know exactly how much data it should be displaying so I can check that I've got all the data, and therefore that my scraping code is working correctly.
Further reading
Absolute beginner's introduction to Selenium: https://www.guru99.com/selenium-csharp-tutorial.html
(A curiosity in that article is the way that it starts by creating a console application project and later changes its output type to class library and manually adds the unit test packages, when the project could have been created using one of Visual Studio's unit test project templates. It gets to the right place in the end, albeit via a rather odd route.)
Selenium documentation: https://www.selenium.dev/documentation/
Happy scraping!
If you need to fully execute the web page, then a complete browser like CefSharp is your only option.
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically. I know that CefSharp can simulate user actions like clicking, scrolling, etc.

How to prevent "stale element" inside a foreach loop?

I'm using Selenium to retrieve data from this site, and I encountered a little problem when I try to click an element within a foreach loop.
What I'm trying to do
I'm trying to get the table associated to a specific category of odds, in the link above we have different categories:
As you can see from the image, I clicked on Asian handicap -1.75 and the site has generated a table through JavaScript, so inside my code I'm trying to get that table by finding the corresponding element and clicking it.
Code
Actually I have two methods; the first, called GetAsianHandicap, iterates over all categories of odds:
public List<T> GetAsianHandicap(Uri fixtureLink)
{
    // Contains all the categories displayed on the page
    string[] categories = new string[] { "-1.75", "-1.5", "-1.25", "-1", "-0.75", "-0.5", "-0.25", "0", "+0.25", "+0.5", "+0.75", "+1", "+1.25", "+1.5", "+1.75" };
    foreach (string cat in categories)
    {
        // Get the html of the table for the current category
        string html = GetSelector("Asian handicap " + cat);
        if (html == string.Empty)
            continue;

        // other code
    }
}
and then the method GetSelector, which clicks on the searched element; this is the design:
public string GetSelector(string selector)
{
    // Get the available table containers (the categories).
    var containers = driver.FindElements(By.XPath("//div[@class='table-container']"));
    // Store the html to return.
    string html = string.Empty;
    foreach (IWebElement container in containers)
    {
        // Container not available for click.
        if (container.GetAttribute("style") == "display: none;")
            continue;
        // Get container header (contains the description).
        IWebElement header = container.FindElement(By.XPath(".//div[starts-with(@class, 'table-header')]"));
        // Store the table description.
        string description = header.FindElement(By.TagName("a")).Text;
        // The container contains the searched category.
        if (description.Trim() == selector)
        {
            // Get the available links.
            var listItems = driver.FindElement(By.Id("odds-data-table")).FindElements(By.TagName("a"));
            // Get the element to click.
            IWebElement element = listItems.Where(li => li.Text == selector).FirstOrDefault();
            // The element exists.
            if (element != null)
            {
                // Click on the element to load the table.
                element.Click();
                // Wait a few seconds on ChromeDriver for the table to load.
                driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(20);
                // Get the new html of the page.
                html = driver.PageSource;
            }
            return html;
        }
    }
    return string.Empty;
}
Problem and exception details
When the foreach reaches this line:
var listItems = driver.FindElement(By.Id("odds-data-table")).FindElements(By.TagName("a"));
I get this exception:
'OpenQA.Selenium.StaleElementReferenceException' in WebDriver.dll
stale element reference: element is not attached to the page document
Searching for this error suggests that the HTML page source was changed, but in this case I store the element to click in one variable and the HTML itself in another variable, so I can't figure out how to patch this issue.
Could someone help me?
Thanks in advance.
I looked at your code and I think you're making it more complicated than it needs to be. I'm assuming you want to scrape the table that is exposed when you click one of the handicap links. Here's some simple code to do this. It dumps the text of the elements which ends up unformatted but you can use this as a starting point and add functionality if you want. I didn't run into any StaleElementExceptions when running this code and I never saw the page refresh so I'm not sure what other people were seeing.
string url = "http://www.oddsportal.com/soccer/europe/champions-league/paok-spartak-moscow-pIXFEt8o/#ah;2";
driver.Url = url;
// get all the (visible) handicap links and click them to open the page and display the table with odds
IReadOnlyCollection<IWebElement> links = driver.FindElements(By.XPath("//a[contains(.,'Asian handicap')]")).Where(e => e.Displayed).ToList();
foreach (var link in links)
{
link.Click();
}
// print all the odds tables
foreach (var item in driver.FindElements(By.XPath("//div[#class='table-container']")))
{
Console.WriteLine(item.Text);
Console.WriteLine("====================================");
}
I would suggest that you spend some more time learning locators. Locators are very powerful and can save you having to stack nested loops looking for one thing... and then children of that thing... and then children of that thing... and so on. The right locator can find all that in one scrape of the page which saves a lot of code and time.
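As an illustration of that point, a single XPath can go straight from the visible container to the link for one category. This is only a sketch based on the markup implied by the question, so the attribute values and link text are assumptions:
// Hypothetical single-locator version of the container/header/link search above.
string category = "Asian handicap -1.75";
var link = driver.FindElements(By.XPath(
        "//div[@class='table-container'][not(contains(@style,'display: none'))]" +
        "//a[normalize-space(text())='" + category + "']"))
    .FirstOrDefault();
if (link != null)
{
    link.Click();
}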
As you mentioned in the related post, this issue occurs because the site executes an auto refresh.
Solution 1:
If there is an explicit way to refresh the page, perform that refresh on a periodic basis, or (if you are sure) only when you actually need to refresh.
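A minimal sketch of that idea (assuming you re-locate the elements after every refresh so no stale references survive it):
// Refresh at a point you control, then re-find the elements before using them,
// so the site's auto refresh doesn't invalidate references mid-loop.
driver.Navigate().Refresh();
var containers = driver.FindElements(By.XPath("//div[@class='table-container']"));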
Solution 2:
Create extension methods for FindElement and FindElements so that they try to get the element within a given timeout.
public static IWebElement FindElement(this IWebDriver driver, By by, int timeout)
{
    if (timeout > 0)
    {
        return new WebDriverWait(driver, TimeSpan.FromSeconds(timeout)).Until(ExpectedConditions.ElementToBeClickable(by));
    }
    return driver.FindElement(by);
}

public static IReadOnlyCollection<IWebElement> FindElements(this IWebDriver driver, By by, int timeout)
{
    if (timeout > 0)
    {
        return new WebDriverWait(driver, TimeSpan.FromSeconds(timeout)).Until(ExpectedConditions.PresenceOfAllElementsLocatedBy(by));
    }
    return driver.FindElements(by);
}
so your code will use these like this:
var listItems = driver.FindElement(By.Id("odds-data-table"), 30).FindElements(By.TagName("a"),30);
Solution 3:
Handle StaleElementReferenceException using extension methods:
public static IWebElement FindElement(this IWebDriver driver, By by, int maxAttempt)
{
    for (int attempt = 0; attempt < maxAttempt; attempt++)
    {
        try
        {
            return driver.FindElement(by);
        }
        catch (StaleElementReferenceException)
        {
            // try again on the next attempt
        }
    }
    return null;
}

public static IReadOnlyCollection<IWebElement> FindElements(this IWebDriver driver, By by, int maxAttempt)
{
    for (int attempt = 0; attempt < maxAttempt; attempt++)
    {
        try
        {
            return driver.FindElements(by);
        }
        catch (StaleElementReferenceException)
        {
            // try again on the next attempt
        }
    }
    return null;
}
Your code will use these like this:
var listItems = driver.FindElement(By.Id("odds-data-table"), 2).FindElements(By.TagName("a"),2);
Use this:
string description = header.FindElement(By.XPath("strong/a")).Text;
instead of your:
string description = header.FindElement(By.TagName("a")).Text;

Selenium c# automated test

I have made an automated test with Selenium C# and have a problem. My test writes some info into a form and then submits it. If, after submitting, the div that contains some info shows the message "Formoje yra klaidu", the test must write the email from the form to a file. The problem is that this div is not visible when the email isn't wrong, and my test just stops at the line where the IWebElement is found by XPath, because the element isn't visible. Here's some of the code:
for (int i = 0; i < array.Length; i++)
{
    IWebElement PasirinktiParkinga = driver.FindElement(By.CssSelector("#zone_16 > td:nth-child(5) > a:nth-child(1)"));
    PasirinktiParkinga.Click();
    IWebElement Vardas = driver.FindElement(By.Id("firstname1"));
    Vardas.Clear();
    Vardas.SendKeys("Vardas");
    IWebElement Pavarde = driver.FindElement(By.Id("lastname1"));
    Pavarde.Clear();
    Pavarde.SendKeys("Pavarde");
    IWebElement AutoNumeris = driver.FindElement(By.Id("vehicle_number1"));
    AutoNumeris.Clear();
    AutoNumeris.SendKeys("ASD123");
    IWebElement Pastas = driver.FindElement(By.Id("email1"));
    Pastas.Clear();
    Pastas.SendKeys(array[i]);
    IWebElement Taisykles = driver.FindElement(By.CssSelector("div.checks:nth-child(5) > div:nth-child(1) > label:nth-child(2)"));
    Taisykles.Click();
    IWebElement uzsakyti = driver.FindElement(By.CssSelector(".submit-zone > input:nth-child(1)"));
    uzsakyti.Click();
    System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5));
    IWebElement MessageRed = driver.FindElement(By.XPath("//*[@id='step_2']/div[3]")); // This is the line where I want to find the div, but I need to write it so that if it isn't there, the for loop just continues
    if (MessageRed.Text.Contains("Formoje yra klaidų."))
    {
        failure += array[i] + "\n";
        System.IO.File.WriteAllText(@"C:\Users\jarek\Desktop\Failureemail\failure.txt", failure);
    }
    IWebElement unipark = driver.FindElement(By.CssSelector(".logo > a:nth-child(1)"));
    unipark.Click();
    i++;
}
How can I make it so that if this element isn't there, the code doesn't stop?
Can anybody help me?
Well, first of all, don't use any Thread.Sleep at all; use implicit and explicit waits instead. Secondly, try not to use XPath (it is very difficult to maintain and understand). And if you need to verify an element's existence, you can do it this way, e.g.:
var elements = driver.FindElements(By.XPath("//*[@id='step_2']/div[3]"));
if (elements.Count() > 0)
{
    // do everything you want
}
else
{
    // continue doing something else
}
or you can try/catch the NoSuchElementException... it all depends.
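To illustrate the explicit-wait suggestion, here is a minimal sketch (using the locator from the question; the 5-second timeout is an arbitrary choice):
// Explicit wait: poll for up to 5 seconds for the error div to appear,
// and carry on without stopping the test if it never does.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
try
{
    IWebElement messageRed = wait.Until(d => d.FindElement(By.XPath("//*[@id='step_2']/div[3]")));
    if (messageRed.Text.Contains("Formoje yra klaidų."))
    {
        // handle the validation error here
    }
}
catch (WebDriverTimeoutException)
{
    // The element never appeared: the submission succeeded, continue with the loop.
}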
You should check to see if the element exists; in this case, check whether the number of matching elements is greater than 0. This is how I would do it in Java:
if (driver.findElements(By.xpath("//*[@id='step_2']/div[3]")).size() > 0)
{
    // perform your action now
}
else
{
    // perform action if the element is not present
}
I did it like this and it worked
if (driver.FindElements(By.XPath("//*[@id='step_2']/div[3]")).Count != 0)
Be careful with FindElements; the test can take a very long time to execute if you have huge pages.
When I must use FindElements to search for an element, I first use a FindElement to narrow the scope in which FindElements has to look. In my case, execution time is reduced by 2 seconds every time compared to using FindElements directly.
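A minimal sketch of that scoping idea, reusing the locator from the question purely as an example:
// Find the parent element once, then search only inside it.
IWebElement step2 = driver.FindElement(By.Id("step_2"));
if (step2.FindElements(By.XPath("./div[3]")).Count != 0)
{
    // the error message div is present
}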
Use an Implicit Wait. This allows you to enter a value in seconds that webdriver will wait for an element if it isn't found initially. This example is set for 2 seconds.
driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(2);
You could also use a try{} catch{}.
Also if you want to clean up your code you could write functions for finding elements and then just pass that id name into the function. It will make things a lot clearer and easier to read.
Here is my method for finding an element by ID
static void ClickElement_ByID(string elementName)
{
    try
    {
        IWebElement test = driver.FindElement(By.Id("" + elementName + ""));
        Console.WriteLine("Found: " + elementName);
        test.Click();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
    }
}

On a web performance test, can I specify multiple expected response pages?

When dealing with a coded web performance test (in C#), is it possible to tell the web test to expect multiple valid response pages? We have certain criteria that happen on login, and the user may be taken to a few different pages depending on some flags, so expecting a single response URL isn't really possible.
Can't you simply use the extract rules to extract something from each page you could get redirected to?
Here you can find some guidance on how to set things up:
http://www.dotnetfunda.com/articles/show/901/web-performance-test-using-visual-studio-part-i
or if this doesn't work for you, you could also code your custom validation rule:
http://msdn.microsoft.com/en-us/library/ms182556.aspx
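For the custom validation rule route, a rough sketch (the ValidationRule base class and ValidationEventArgs come from Microsoft.VisualStudio.TestTools.WebTesting; the marker strings for each possible landing page are hypothetical and would need to match your pages):
using Microsoft.VisualStudio.TestTools.WebTesting;

// Passes validation if the response body contains any of the expected page markers.
public class AnyExpectedPageValidationRule : ValidationRule
{
    public override void Validate(object sender, ValidationEventArgs e)
    {
        string body = e.Response.BodyString;
        e.IsValid = body.Contains("Welcome back")               // normal landing page (hypothetical marker)
                 || body.Contains("Complete your profile")      // first-login page (hypothetical marker)
                 || body.Contains("Password expired");          // forced password reset page (hypothetical marker)
        if (!e.IsValid)
        {
            e.Message = "None of the expected landing pages was returned.";
        }
    }
}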
On a Coded UI test of a web page that could return either of two quite different pages, I wrote the following code. It worked fine for that test; there are several possible tidy-ups that I would investigate if I needed something similar again, so please consider this a starting point.
The basic idea is to check the current web page for text identifying which of the expected pages is currently shown. If found, deal with that page. If not found, pause for a short time to allow the page to load, then look again. A timeout is added in case an expected page never appears.
public void LookForResultPages()
{
    Int32 maxMilliSecondsToWait = 3 * 60 * 1000;
    bool processedPage = false;
    do
    {
        if (CountProperties("InnerText", "Some text on most common page") > 0)
        {
            ... process that page;
            processedPage = true;
        }
        else if (CountProperties("InnerText", "Some text on another page") > 0)
        {
            ... process that page;
            processedPage = true;
        }
        else
        {
            const Int32 pauseTime = 500;
            Playback.Wait(pauseTime); // In milliseconds
            maxMilliSecondsToWait -= pauseTime;
        }
    } while (maxMilliSecondsToWait > 0 && !processedPage);
    if (!processedPage)
    {
        ... handle timeout;
    }
}

public int CountProperties(string propertyName, string propertyValue)
{
    HtmlControl html = new HtmlControl(this.myBrowser);
    UITestControlCollection htmlcol = new UITestControlCollection();
    html.SearchProperties.Add(propertyName, propertyValue, PropertyExpressionOperator.Contains);
    htmlcol = html.FindMatchingControls();
    return htmlcol.Count;
}
