Everything was working well when I was using Selenium alone, but when I tried PhantomJS I get null when finding elements.
static void Main()
{
IWebDriver driver = new PhantomJSDriver();
driver.Navigate().GoToUrl("https://sellercentral.amazon.de/gp/homepage.html");
var username = driver.FindElement(By.Id("username"));
var password = driver.FindElement(By.Id("password"));
username.SendKeys("*************************");
password.SendKeys("*************");
driver.FindElement(By.Id("sign-in-button")).Submit();
string messagesURL = "https://sellercentral.amazon.de/gp/communication-manager/inbox.html/ref=ag_cmin__cmin?ie=UTF8&clcmResponseTimeSuboptions=&dateExactEnd=&dateExactStart=&dateFilter=&itemsPerPage=20&marketplaceId=A1PA6795UKMFR9&otherPartyId=&pageNum=1&refIndex=40&searchBoxText=&showFilters=0&sortBy=ArrivalDate&sortOrder=Descending";
driver.Navigate().GoToUrl(messagesURL);
ParseMessages(driver);
}
public static void ParseMessages(IWebDriver driver) {
var node = driver.FindElements(By.ClassName("list-row-white"));
foreach (var n in node) {
var refNo = n.FindElement(By.ClassName("data-display-field-border-lbr"));
Console.WriteLine(refNo.Text);
}
}
In this line of code, I get null: var node = driver.FindElements(By.ClassName("list-row-white"));
But when I used Selenium alone with an actual browser, everything worked. I want this to run headless, though.
I am new to PhantomJS, so please tell me whether I implemented it correctly and whether my code is right.
In some cases, PhantomJS has issues with CSS-related locators or element classes.
In such cases, converting the locator to XPath may solve the problem.
// Thread.Sleep(3000) // Please, replace me with WebDriverWait ^_^
var node = driver.FindElements(By.XPath("//*[contains(@class,'list-row-white')]"));
Another point: PhantomJS works much faster than any other browser.
Try inserting Thread.Sleep before the failing line of code.
If the code then passes, please replace the sleep with a proper WebDriverWait expression.
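For example, a minimal sketch of that wait (assuming the Selenium support ExpectedConditions helpers are available in your project):
// Wait up to 10 seconds for the rows to be present instead of sleeping a fixed time.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
var node = wait.Until(ExpectedConditions.PresenceOfAllElementsLocatedBy(By.XPath("//*[contains(@class,'list-row-white')]")));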
I'm trying to input text into a username field. It appears to find the element, but SendKeys() throws an error stating that the element is not interactable. I'm already waiting until the element exists, so I wouldn't think it's related to waiting. Here is my code:
Console.WriteLine("Hello, World!");
ChromeDriver cd = new ChromeDriver(@"C:\Users\xxx\Downloads\chromedriver_win32\");
cd.Url = @"https://connect.ramtrucks.com/us/en/login";
cd.Navigate();
WebDriverWait wait = new WebDriverWait(cd,TimeSpan.FromSeconds(10));
IWebElement e = wait.Until(ExpectedConditions.ElementExists(By.ClassName("analytics-login-username")));
e.SendKeys("xxx#gmail.com");
Any suggestions would be much appreciated :)
There are 2 things you need to fix here:
You are using a locator that is not unique.
You need to wait for the element to be clickable, not just for it to exist. I couldn't find an element-clickability condition in C#, so element visibility can be used instead.
So, instead of
IWebElement e = wait.Until(ExpectedConditions.ElementExists(By.ClassName("analytics-login-username")));
e.SendKeys("xxx#gmail.com");
Try this:
wait.Until(ExpectedConditions.ElementIsVisible(By.CssSelector("input.analytics-login-username"))).SendKeys("xxx@gmail.com");
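If you also want to keep a reference to the element for later use, the same wait can be split into two statements (same locator assumed):
IWebElement username = wait.Until(ExpectedConditions.ElementIsVisible(By.CssSelector("input.analytics-login-username")));
username.SendKeys("xxx@gmail.com");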
I'm quite new to Selenium and have been looking for an answer to this question, but so far none of my attempts has produced the desired result.
I followed other answers to access the Chrome console logs, but I get an exception:
ChromeOptions options = new ChromeOptions();
options.SetLoggingPreference(LogType.Browser, LogLevel.All);
var driver = new ChromeDriver(options);
driver.Manage().Window.Maximize();
driver.Url = "https://test.test";
var homePage = new HomePage(driver); //POM
homePage.SignIn().Click();
homePage.Email("email");
homePage.Password("pw");
homePage.LogIn();
var logs = driver.Manage().Logs.GetLog(LogType.Browser);
foreach (var log in logs)
{
Console.WriteLine(log.ToString());
}
The exception is thrown on: var logs = driver.Manage().Logs.GetLog(LogType.Browser);
System.NullReferenceException: 'Object reference not set to an instance of an object.'
I haven't been able to understand why it is thrown.
After that, I would like to assert the console logs to see if a specific entry is present. Is that possible?
So, this is a dirty workaround. If you get any good answer, please don't use mine.
Modify the default console.log method to store data in a newly introduced global variable:
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
//A verbatim string (@"...") lets the script stay on multiple lines and still compile.
js.ExecuteScript(@"
window.oldConsoleLog = window.console.log;
window.logCalls = [];
window.console.log = function(){
oldConsoleLog.apply(window.console, arguments);
window.logCalls.push(arguments);
}
");
Now you'll be able to get all the captured calls with the following code:
var calls = js.ExecuteScript("return window.logCalls");
If you need a cleanup:
js.ExecuteScript("delete window.logCalls;window.console.log = window.oldConsoleLog;")
I'm using Selenium to retrieve data from this site, and I ran into a little problem when I try to click an element inside a foreach.
What I'm trying to do
I'm trying to get the table associated with a specific category of odds; in the link above we have different categories:
As you can see from the image, I clicked on Asian handicap -1.75 and the site generated a table through JavaScript, so in my code I'm trying to get that table by finding the corresponding element and clicking it.
Code
Actually I have two methods; the first, called GetAsianHandicap, iterates over all categories of odds:
public List<T> GetAsianHandicap(Uri fixtureLink)
{
//Contains all the categories displayed on the page
string[] categories = new string[] { "-1.75", "-1.5", "-1.25", "-1", "-0.75", "-0.5", "-0.25", "0", "+0.25", "+0.5", "+0.75", "+1", "+1.25", "+1.5", "+1.75" };
foreach(string cat in categories)
{
//Get the html of the table for the current category
string html = GetSelector("Asian handicap " + cat);
if(html == string.Empty)
continue;
//other code
}
}
and then the method GetSelector, which clicks on the searched element; this is the design:
public string GetSelector(string selector)
{
//Get the available table container (the category).
var containers = driver.FindElements(By.XPath("//div[@class='table-container']"));
//Store the html to return.
string html = string.Empty;
foreach (IWebElement container in containers)
{
//Container not available for click.
if (container.GetAttribute("style") == "display: none;")
continue;
//Get container header (contains the description).
IWebElement header = container.FindElement(By.XPath(".//div[starts-with(@class, 'table-header')]"));
//Store the table description.
string description = header.FindElement(By.TagName("a")).Text;
//The container contains the searched category
if (description.Trim() == selector)
{
//Get the available links.
var listItems = driver.FindElement(By.Id("odds-data-table")).FindElements(By.TagName("a"));
//Get the element to click.
IWebElement element = listItems.Where(li => li.Text == selector).FirstOrDefault();
//The element exist
if (element != null)
{
//Click on the container for load the table.
element.Click();
//Wait few seconds on ChromeDriver for table loading.
driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(20);
//Get the new html of the page
html = driver.PageSource;
}
return html;
}
}
return string.Empty;
}
Problem and exception details
When the foreach reaches this line:
var listItems = driver.FindElement(By.Id("odds-data-table")).FindElements(By.TagName("a"));
I get this exception:
'OpenQA.Selenium.StaleElementReferenceException' in WebDriver.dll
stale element reference: element is not attached to the page document
Searching for the error tells me that the html page source was changed, but in this case I store the element to click in one variable and the html itself in another, so I can't figure out how to patch this issue.
Could someone help me?
Thanks in advance.
I looked at your code and I think you're making it more complicated than it needs to be. I'm assuming you want to scrape the table that is exposed when you click one of the handicap links. Here's some simple code to do this. It dumps the text of the elements, which ends up unformatted, but you can use it as a starting point and add functionality if you want. I didn't run into any StaleElementReferenceExceptions when running this code, and I never saw the page refresh, so I'm not sure what other people were seeing.
string url = "http://www.oddsportal.com/soccer/europe/champions-league/paok-spartak-moscow-pIXFEt8o/#ah;2";
driver.Url = url;
// get all the (visible) handicap links and click them to open the page and display the table with odds
IReadOnlyCollection<IWebElement> links = driver.FindElements(By.XPath("//a[contains(.,'Asian handicap')]")).Where(e => e.Displayed).ToList();
foreach (var link in links)
{
link.Click();
}
// print all the odds tables
foreach (var item in driver.FindElements(By.XPath("//div[#class='table-container']")))
{
Console.WriteLine(item.Text);
Console.WriteLine("====================================");
}
I would suggest that you spend some more time learning locators. Locators are very powerful and can save you from having to stack nested loops looking for one thing... and then children of that thing... and then children of that thing... and so on. The right locator can find all of that in one scrape of the page, which saves a lot of code and time.
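For example, a single locator can go straight from the visible containers to the link you want to click, without the nested loops. This is only a sketch and assumes the page structure described in the question:
// One query: visible table containers, then the link whose text is the wanted category.
IWebElement handicapLink = driver.FindElement(By.XPath("//div[@class='table-container'][not(contains(@style,'display: none'))]//a[normalize-space(.)='Asian handicap -1.75']"));
handicapLink.Click();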
As you mentioned in the related post, this issue occurs because the site executes an auto refresh.
Solution 1:
If there is an explicit way to refresh, I would suggest performing that refresh on a periodic basis, or whenever you are sure you need it.
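A minimal sketch of that idea (when exactly you refresh is up to you; this just shows an explicit refresh before re-locating elements):
// Refresh explicitly so you control when the DOM is replaced, then re-locate the elements.
driver.Navigate().Refresh();
var containers = driver.FindElements(By.XPath("//div[@class='table-container']"));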
Solution 2:
Create an extension method for FindElement and FindElements, so that each tries to get the element within a given timeout.
public static IWebElement FindElement(this IWebDriver driver, By by, int timeout)
{
if(timeout > 0)
{
return new WebDriverWait(driver, TimeSpan.FromSeconds(timeout)).Until(ExpectedConditions.ElementToBeClickable(by));
}
return driver.FindElement(by);
}
public static IReadOnlyCollection<IWebElement> FindElements(this IWebDriver driver, By by, int timeout)
{
if(timeout >0)
{
return new WebDriverWait(driver, TimeSpan.FromSeconds(timeout)).Until(ExpectedConditions.PresenceOfAllElementsLocatedBy(by));
}
return driver.FindElements(by);
}
so your code will use these like this:
var listItems = driver.FindElement(By.Id("odds-data-table"), 30).FindElements(By.TagName("a"),30);
Solution 3:
Handle StaleElementReferenceException using an extension method:
public static IWebElement FindElement(this IWebDriver driver, By by, int maxAttempt)
{
//Retry the lookup while the element reference is stale.
for(int attempt = 0; attempt < maxAttempt; attempt++)
{
try
{
return driver.FindElement(by);
}
catch(StaleElementReferenceException)
{
}
}
return null;
}
public static IReadOnlyCollection<IWebElement> FindElements(this IWebDriver driver, By by, int maxAttempt)
{
//Retry the lookup while the element references are stale.
for(int attempt = 0; attempt < maxAttempt; attempt++)
{
try
{
return driver.FindElements(by);
}
catch(StaleElementReferenceException)
{
}
}
return new List<IWebElement>().AsReadOnly();
}
Your code will use these like this:
var listItems = driver.FindElement(By.Id("odds-data-table"), 2).FindElements(By.TagName("a"),2);
Use this:
string description = header.FindElement(By.XPath("strong/a")).Text;
instead of your:
string description = header.FindElement(By.TagName("a")).Text;
Here's the code.
browser = new FirefoxDriver();
browser.Navigate().GoToUrl("https://www.vicroads.vic.gov.au/registration/buy-sell-or-transfer-a-vehicle/buy-a-vehicle/check-vehicle-registration/vehicle-registration-enquiry");
Thread.Sleep(5000);
browser.FindElement(By.Name("ph_pagebody_0$phthreecolumnmaincontent_1$panel$VehicleSearch$RegistrationNumberCar$RegistrationNumber_CtrlHolderDivShown")).SendKeys("asdf");
It works OK, but if I run it in a thread it says the element is not visible... Why does it throw when run in a thread?
The element could be non-visible because the page hadn't reloaded at the moment of the check, or because the website uses dynamic names, classes, etc.
You can try something like this:
IWebDriver browser = new FirefoxDriver();
browser.Navigate().GoToUrl("https://www.vicroads.vic.gov.au/registration/buy-sell-or-transfer-a-vehicle/buy-a-vehicle/check-vehicle-registration/vehicle-registration-enquiry");
while ( true ) {
try {
browser.FindElement(By.Name("ph_pagebody_0$phthreecolumnmaincontent_1$panel$VehicleSearch$RegistrationNumberCar$RegistrationNumber_CtrlHolderDivShown")).SendKeys("asdf");
break;
}
catch { Thread.Sleep(1000);}
}
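An alternative to the endless retry loop is an explicit wait; a minimal sketch, assuming the Selenium support ExpectedConditions helpers are available:
var wait = new WebDriverWait(browser, TimeSpan.FromSeconds(30));
wait.Until(ExpectedConditions.ElementIsVisible(By.Name("ph_pagebody_0$phthreecolumnmaincontent_1$panel$VehicleSearch$RegistrationNumberCar$RegistrationNumber_CtrlHolderDivShown"))).SendKeys("asdf");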
Going by the XPath you tried out, it seems the name attribute is dynamic. To locate the text box for the registration number, you can try either of the following options:
CssSelector :
browser.FindElement(By.CssSelector("input[class='text text xlong v_registrationNumber v_required'][id^='ph_pagebody_']")).SendKeys("asdf");
XPath :
browser.FindElement(By.XPath("//input[@class='text text xlong v_registrationNumber v_required'][starts-with(@id, 'ph_pagebody_')]")).SendKeys("asdf");
I'm following a tutorial to test search functionality on Wikipedia using C#. My test keeps failing because the text from the h1 element I'm trying to return keeps coming back empty. There is definitely text inside the h1 header. Any idea why this element returns empty when it has text?
IWebDriver driver = new FirefoxDriver();
driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(5));
driver.Navigate().GoToUrl("https://en.wikipedia.org/wiki/Main_Page");
IWebElement searchInput = driver.FindElement(By.Id("searchInput"));
searchInput.SendKeys("Christiaan Barnard");
searchInput.SendKeys(Keys.Enter);
IWebElement firstHeading = driver.FindElement(By.Id("firstHeading"));
Assert.AreEqual("Christiaan Barnard", firstHeading.Text);
driver.Quit();
It may be that the element is found but does not yet have the expected text. The best approach is to wait until the text has the expected value, using WebDriverWait:
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
var result = wait.Until(ExpectedConditions.TextToBePresentInElementLocated(By.Id("firstHeading"), "Christiaan Barnard"));
Assert.IsTrue(result);