I'm trying to grab the text from a span that's inside a div. The div is currently selected, so it has "curr" within its class.
The DOM:
<a id="ctl00_oAjaxContentPlaceHolder_LinkButtonAlerts" href="javascript:__doPostBack('ctl00$oAjaxContentPlaceHolder$LinkButtonAlerts','')">
<div id="ctl00_oAjaxContentPlaceHolder_divAlertAlertsHolder" class="profile-menu-alerts curr" title="Activities & Alerts">
<span>Activities & Alerts</span>
</div>
</a>
This XPath should find the span (it works when I use the Find tool in DevTools), but it fails to find the element
//div[contains(@class,'curr')]/span
If I remove the /span from the xpath, it finds the div just fine. And the strange part is that if I grab the text of that div with
driver.FindElement(By.XPath("//div[contains(@class,'curr')]")).Text;
it returns "<span>Activities & Alerts</span>". Why is this span element being incorrectly recognized as Text?
I ran this on my solution using the below and had no issues.
var test = Driver.FindElement_byXPath("//div[contains(@class,'curr')]/span").Text;
HTML, edited to add another option:
<a id="ctl00_oAjaxContentPlaceHolder_LinkButtonAlerts" href="javascript:__doPostBack('ctl00$oAjaxContentPlaceHolder$LinkButtonAlerts','')">
<div id="ctl00_oAjaxContentPlaceHolder_divAlertAlertsHolder" class="profile-menu-alerts" title="Activities & Alerts">
<span>Test 1</span>
</div>
</a>
<a id="ctl00_oAjaxContentPlaceHolder_LinkButtonAlerts" href="javascript:__doPostBack('ctl00$oAjaxContentPlaceHolder$LinkButtonAlerts','')">
<div id="ctl00_oAjaxContentPlaceHolder_divAlertAlertsHolder" class="profile-menu-alerts curr" title="Activities & Alerts">
<span>Activities & Alerts</span>
</div>
</a>
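For reference, a minimal sketch of how the span text could be read with that XPath. The driver setup and URL are placeholders, not part of the original post, and the explicit wait is just one way to make sure the element has rendered before the lookup:
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;
// Placeholder setup; substitute your own driver and URL.
IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://example.com");
// Wait until the currently selected div (class contains "curr") has its span rendered.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement span = wait.Until(d => d.FindElement(By.XPath("//div[contains(@class,'curr')]/span")));
// .Text returns the rendered inner text, e.g. "Activities & Alerts".
Console.WriteLine(span.Text);
driver.Quit();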
I'm writing tests with Selenium WebDriver in C#, and I can't understand why only the first of a list of same-level div elements can be identified with XPath.
I have this HTML and have inspected two elements on the page, two different divs. I managed to copy the text of the first element by running this simple code:
IWebElement chapterElement = webDriver.FindElement(By.XPath("/html/body/div[3]/main/div[2]/div[3]/article/div[1]"));
...after which I can just type:
chapterElement.Text to find out the inner text.
The other one is another div at the same level as the first; the XPath is the one I copied from the HTML (Copy full XPath):
IWebElement chapterElement = webDriver.FindElement(By.XPath("/html/body/div[3]/main/div[2]/div[3]/article/div[2]"));
... and it doesn't fail, but it doesn't return the text either; the text is "" (an empty string).
The only differences between the two divs are:
the last segment in the path: div[1] versus div[2].
the second div is actually hidden from the page (probably because it lacks the class "chapter_visible"), but it does show up completely in the HTML when I Inspect it!
In case it helps,
"/html/body/div[3]/main/div[2]/div[3]/article/div[1]"
corresponds with:
<div class="chapter chapter chapter_visible" data-chapterno="0" data-chapterid="5e8798266cee070006f5a3d1" style="display: block;">
<h1>some text</h1>
<div class="chapter__content"><p>some text</p>
<p>some text</p>
<p>some text</p>
<ul>
<li>some text</li>
<li>some text</li>
<li>some text.</li>
</ul></div>
</div>
and
"/html/body/div[3]/main/div[2]/div[3]/article/div[2]" (the second xPath)
corresponds to the following (as is located at the same level as the first):
<div class="chapter chapter" data-chapterno="1" data-chapterid="5e8798436cee070006f5a3d2">
<h1>some text</h1>
<div class="chapter__content"><p>some text</p>
<p><strong>some text</strong></p>
<p>some text.</p>
<p>some text</p>
<p>some text</p></div>
</div>
This is my first experience playing around with XPath, and I'm a bit disappointed because I just copied the XPath; I didn't even write it manually. It was supposed to be fast and straightforward, right? Thank you.
IWebElement chapterElement = webDriver.FindElement(By.XPath("//div[@class='chapter chapter']"));
Can you try this?
If you want to read an attribute, note that GetAttribute returns a string:
string attributeValue = webDriver.FindElement(By.XPath("//div[@class='chapter chapter']")).GetAttribute("attribute_name");
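The empty string is most likely caused by the second div being hidden: WebDriver's .Text only returns text that is actually rendered, so a div hidden with something like display: none yields "". If you need the raw text anyway, reading the textContent DOM property is a common workaround. A minimal sketch; the data-chapterno based XPath below is only an assumption drawn from the posted HTML:
// Visible chapter: .Text works as expected.
IWebElement visibleChapter = webDriver.FindElement(By.XPath("//article/div[@data-chapterno='0']"));
Console.WriteLine(visibleChapter.Text);
// Hidden chapter: .Text is "", but the DOM node still holds the text,
// which GetAttribute("textContent") can read.
IWebElement hiddenChapter = webDriver.FindElement(By.XPath("//article/div[@data-chapterno='1']"));
Console.WriteLine(hiddenChapter.GetAttribute("textContent"));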
I am trying to get a link on a website, and the link's name changes on a daily basis. The structure is similar to this (but with many more levels):
<li>
<div class = "contentPlaceHolder1">
<div class="content">
<p>
<strong>Today's File Here:</strong>
</p>
</div>
</div>
</li>
<li>...</li>
<li>...</li>
<li>...</li>
<li>
<div class = "contentPlaceHolder1">
<div class="content">
<div class="DocLink">
<li>
<a class="txtLnk" href="...">Download</a>
</li>
</div>
</div>
</div>
</li>
<li>...</li>
etc...
I can find the text (which will remain constant) that is immediately above it in the page by using
IWebElement foundTextElement = chrome.FindElement(By.XPath("//p/strong[contains(text(), \"Today's File Here:\")]"));
How can I find the next link in the page by using XPath (or alternative solution)? I am unsure of how to search for the next element after this.
If I use
IWebElement link = chrome.FindElement(By.XPath("//a[@class='txtLnk']"));
then this finds the first link in the page. I only want the first occurrence of it after 'foundTextElement'.
I have had it working by navigating up the tree to the parent above <li>, and finding the 4th sibling using By.XPath("following-sibling::*[4]/div/div/div/li/a[@class='txtLnk']"), but that seems a little precarious to me.
I could parse the HTML until I find the next occurrence, but I was wondering whether there is a cleverer way of doing this?
Thanks.
You can try this XPath. It's complicated, since we can't see the rest of the page to optimize it:
//li[preceding-sibling::li[.//*[contains(text(),'File Here')]]][.//a[contains(@class,'txtLnk')]][1]
It selects the first li that contains an a tag with the txtLnk class and that comes after an li element whose text contains "File Here".
By.XPath("//a[#class='txtLnk'")
Is a very generic selector, there might be other elements on the page using the same class
You can find this using a CSS selector; try this:
IWebElement aElement = chrome.FindElement(By.CssSelector("div.contentPlaceHolder1 div.content div.DocLink li a"));
Then you can get the href using:
string link = aElement.GetAttribute("href");
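An alternative that avoids counting siblings is the XPath following:: axis, which walks forward through the document from the element you already matched. A sketch reusing the question's variables; the assumption that the download link really carries the txtLnk class comes from the snippets above rather than from a confirmed page source:
// Find the constant label first (double quotes inside the XPath literal
// avoid clashing with the apostrophe in "Today's").
IWebElement foundTextElement = chrome.FindElement(By.XPath("//p/strong[contains(text(), \"Today's File Here:\")]"));
// Then take the first txtLnk anchor that appears anywhere after it in document order.
IWebElement link = foundTextElement.FindElement(By.XPath("following::a[@class='txtLnk'][1]"));
string href = link.GetAttribute("href");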
I'm currently attempting to use HtmlAgilityPack to extract specific links from an html page. I tried using plain C# to force my way in but that turned out to be a real pain. The links are all inside of <div> tags that all have the same class. Here's what I have:
HtmlWeb web = new HtmlWeb();
HtmlDocument html = web.Load(url);
//this should select only the <div> tags with the class acTrigger
foreach (HtmlNode node in html.DocumentNode.SelectNodes("//div[@class='acTrigger']"))
{
//not sure how to dig further in to get the href values from each of the <a> tags
}
and the sites code looks along the lines of this
<li>
<div class="acTrigger">
<a href="/16014988/d/" onclick="return queueRefinementAnalytics('Category','Battery')">
Battery <em> (1)</em>
</a>
</div>
</li>
<li>
<div class="acTrigger">
<a href="/15568540/d/" onclick="return queueRefinementAnalytics('Category','Brakes')">
Brakes <em> (2)</em>
</a>
</div>
</li>
<li>
<div class="acTrigger">
<a href="/11436914/d/1979-honda-ct90-cables-lines" onclick="return queueRefinementAnalytics('Category','Cables/Lines')">
Cables/Lines <em> (1)</em>
</a>
</div>
</li>
There are a lot of links on this page, but the hrefs I need are contained inside of those <a> tags, which are nested inside the <div class="acTrigger"> tags. It would be simple if each <a> had a unique class, but unfortunately only the <div> tags have classes. What I need to do is grab each one of those hrefs and store them so I can retrieve them later, go to each page, and retrieve more information from each page. I just need a nudge in the right direction to get over this hump; then I should be able to do the other pages as well. I have no previous experience with HtmlAgilityPack, and all the examples I find seem to want to extract all the URLs from a page, not specific ones. I just need a link to an example or documentation; any help is greatly appreciated.
You should be able to change your select to include the <a> tag: //div[@class='acTrigger']/a. That way your HtmlNode is your <a> tag instead of the div.
To store the links you can use GetAttributeValue.
foreach (HtmlNode node in html.DocumentNode.SelectNodes("//div[@class='acTrigger']/a"))
{
// Get the value of the HREF attribute.
string hrefValue = node.GetAttributeValue( "href", string.Empty );
// Then store hrefValue for later.
}
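To keep the links around for the follow-up pages, one option is to collect them into a list and then load each one with the same HtmlWeb instance. A rough sketch building on the loop above (assumes using System; and using System.Collections.Generic; alongside HtmlAgilityPack); the base address is a placeholder, since the hrefs in the sample ("/16014988/d/", ...) are site-relative:
var hrefs = new List<string>();
foreach (HtmlNode node in html.DocumentNode.SelectNodes("//div[@class='acTrigger']/a"))
{
    hrefs.Add(node.GetAttributeValue("href", string.Empty));
}
// Later: visit each stored link and pull whatever you need from that page.
foreach (string href in hrefs)
{
    string absoluteUrl = new Uri(new Uri("https://www.example.com"), href).ToString();
    HtmlDocument page = web.Load(absoluteUrl);
    // ... query page.DocumentNode here ...
}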
I have the two following HTML snippets:
-- first HTML
<div id="FIRST">
<span>foo</span>
<div id="SECOND">
<span>bar</span>
</div>
</div>
-- second HTML
<div id="FIRST">
<div id="SECOND">
<span>bar</span>
</div>
</div>
I would like to get the span directly inside the FIRST div in the first HTML, but there are situations where this span doesn't exist, as you can see in the second HTML.
Now I am using the following code, but the code is getting the span inside the SECOND div.
SelectSingleNode(".//span")
Note: remember that in my example I have only two levels of divs, but in my real HTML I have a lot of levels.
I need to get the span considering only the tags directly inside the FIRST div.
To get only <span>s that are direct children of the <div id="FIRST">, you can use either ./span or span, assuming that the context node on which you call SelectSingleNode() is the aforementioned <div id="FIRST">:
SelectSingleNode("./span")
SelectSingleNode("span")
Here is an alternative:
SelectSingleNode("span[1]");
This selects the first span child element of the context node.
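For completeness, a small sketch of the difference between the relative forms, assuming HtmlAgilityPack (the same XPath rules apply if SelectSingleNode is being called on an XmlNode); firstHtml stands for the first sample above:
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(firstHtml);
HtmlAgilityPack.HtmlNode first = doc.GetElementbyId("FIRST");
// ".//span" searches all descendants, so in the second HTML it would fall through
// to the span inside the SECOND div.
var anySpan = first.SelectSingleNode(".//span");    // <span>foo</span> here
// "./span" (or just "span") only looks at direct children of FIRST,
// so it returns null when the direct span is missing.
var directSpan = first.SelectSingleNode("./span");  // <span>foo</span>, or null in the second HTML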
I am trying to replace the contents of a selected "div" element, and append it to the parent control. So far I am able to clone and append it to the parent, but I want to know how I can replace certain tags inside.
To be specific, here is the jQuery I use to clone the target control:
var x = $(parent).children('div[class="answer"]:first').children('div[class="ansitem"]:first').clone();
The HTML content inside the cloned div is like this:
<div id="ansthumb_anstext_anscontrols">
<div id="image" class="ansthumb">
replace 1
</div>
<div id="atext" class="anstext">
<p class="atext_para">
<span id="mainwrapper_QRep_ARep_0_UName_0" style="color: rgb(51, 102, 255); font-weight: bold;">Replace 2 </span>
Replace 3
</p>
<p id="answercontrols">
<input name="ctl00$mainwrapper$QRep$ctl01$ARep$ctl01$AnsID" id="mainwrapper_QRep_ARep_0_AnsID_0" value='replace 4' type="hidden">
<a id="mainwrapper_QRep_ARep_0_Like_0" title="Like this answer" href="#">Like</a>
<a id="mainwrapper_QRep_ARep_0_Report_0" title="Report question" href="#">Report</a>
<span id="mainwrapper_QRep_ARep_0_lblDatetime_0" class="date"> replace 5 </span>
</p>
</div>
</div>
Here I have marked all the areas I want to be replaced. The IDs of the above div elements are named as such because they are generated within a Repeater control.
I have gone through the jQuery API, and this function seems to be the thing I should be using, as far as I understand:
replaceWith(content)
But the drawback of this approach is that I have to dump the entire HTML into a string variable and include the replacement text wherever needed. I think it is not the best way; maybe something like selecting particular tags and changing their data would be the way to do it. Any help appreciated, guys!
Thanks.
You could use .html() and a couple of other jQuery functions, using the surrounding elements as your selectors.
For example
<script type='text/javascript'>
$("#image").html("YourData1"); //replace 1
var secondSpan = $("#mainwrapper_QRep_ARep_0_UName_0");
$(secondSpan).html("YourData2"); //replace 2
$(secondSpan).after("YourData3"); //replace 3
$("#mainwrapper_QRep_ARep_0_AnsID_0").attr("value", "YourData4"); //replace 4
$("#mainwrapper_QRep_ARep_0_lblDatetime_0").html("YourData5"); //replace 5
</script>
Since these IDs are generated by ASP.NET, you can get the ClientID of the .NET control.
For example:
var secondSpan = $("#<%= UName.ClientID %>");
Hope this helps!