Found a nasty bug in HTMLAgilityPack whereby some attribute values are NOT returned fully - they are truncated. Specifically, when attempting to get the href value out of an anchor tag, only the root domain is returned, anything following (the query string) is completely ignored. Anyone know a good workaround?
Example:
node.SelectSingleNode("//link").Attributes["href"].Value
returns https://www.example.com
instead of returning https://www.example.com/mypage.php?_src=ffk_title&ffkid=66534&site=data:http%3A%2F%2Fwww.othersite.com%2Frss%2F
the link looks like so
<a class="tlink" href="https://www.example.com/mypage.php?_src=ffk_title&ffkid=66534&site=data:http%3A%2F%2Fwww.othersite.com%2Frss%2F" target="_blank">Click to get feed</a>
Anyway - right now, I'll just grab the link tag and parse it with old-fashioned methods - I figure HTMLAgilityPack gets confused if there are atypical characters in the href attribute. I hope it's just something I'm doing wrong, but this kind of quirk really hurts.
For anchor tags, you should use //a XPath expression:
node.SelectSingleNode("//a").Attributes["href"].Value;
Additionally, if you need to reference an anchor with a particular class, you could use:
node.SelectSingleNode("//a[#class='tlink']").Attributes["href"].Value;
A working example can be seen here.
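For reference, a minimal end-to-end sketch of that answer (the page URL here is hypothetical; HtmlWeb is HtmlAgilityPack's helper for fetching a page):

using System;
using HtmlAgilityPack;

var web = new HtmlWeb();
var doc = web.Load("https://www.example.com/feeds"); // hypothetical URL

// //a targets anchor elements; the full href, query string and all, comes back intact.
var anchor = doc.DocumentNode.SelectSingleNode("//a[@class='tlink']");
if (anchor != null)
    Console.WriteLine(anchor.GetAttributeValue("href", ""));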
Related
I am using HtmlAgilityPack. I create an HtmlDocument and LoadHtml with the following string:
<select id="foo_Bar" name="foo.Bar"><option selected="selected" value="1">One</option><option value="2">Two</option></select>
This does some unexpected things. First, it gives two parser errors, EndTagNotRequired. Second, the select node has 4 children - two for the option tags and two more for the inner text of the option tags. Last, the OuterHtml is like this:
<select id="foo_Bar" name="foo.Bar"><option selected="selected" value="1">One<option value="2">Two</select>
So basically it is deciding for me to drop the closing tags on the options. Let's leave aside for a moment whether it is proper and desirable to do that. I am using HtmlAgilityPack to test HTML generation code, so I don't want it to make any decision for me or give any errors unless the HTML is truly malformed. Is there some way to make it behave how I want? I tried setting some of the options for HtmlDocument, specifically:
doc.OptionAutoCloseOnEnd = false;
doc.OptionCheckSyntax = false;
doc.OptionFixNestedTags = false;
This is not working. If HtmlAgilityPack cannot do what I want, can you recommend something that can?
The exact same error is reported on the HAP home page's discussion, but it looks like no meaningful fixes have been made to the project in a few years. Not encouraging.
A quick browse of the source suggests the error might be fixable by commenting out line 92 of HtmlNode.cs:
// they sometimes contain, and sometimes they don't...
ElementsFlags.Add("option", HtmlElementFlag.Empty);
(Actually no, they always contain label text, although a blank string would also be valid text. A careless author might omit the end-tag, but then that's true of any element.)
Addendum: an equivalent solution is to call HtmlNode.ElementsFlags.Remove("option"); before any use of the library (no need to modify the library's source code).
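A minimal sketch of that call in context, using the HTML from the question:

using System;
using HtmlAgilityPack;

// Remove the Empty flag so <option> is parsed as a normal container element.
HtmlNode.ElementsFlags.Remove("option");

var doc = new HtmlDocument();
doc.LoadHtml("<select id=\"foo_Bar\" name=\"foo.Bar\">" +
             "<option selected=\"selected\" value=\"1\">One</option>" +
             "<option value=\"2\">Two</option></select>");

// OuterHtml now keeps the </option> end-tags instead of dropping them.
Console.WriteLine(doc.DocumentNode.OuterHtml);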
It seems that there is some reason not to parse the option tag as a "generic" tag, for XHTML compliance; however, this can be a real pain in the neck.
My suggestion is to do a whole-string replace and change all "option" tags to "my_option" tags (see the sketch after this list); that way you:
Don't have to modify the source of the library (and can upgrade it later).
Can parse as you usually would.
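A rough sketch of that workaround, again with the HTML from the question ("my_option" is an arbitrary stand-in name):

using System;
using HtmlAgilityPack;

string raw = "<select id=\"foo_Bar\" name=\"foo.Bar\">" +
             "<option selected=\"selected\" value=\"1\">One</option>" +
             "<option value=\"2\">Two</option></select>";

// Rename the tags before parsing so HAP treats them as ordinary elements.
string html = raw.Replace("<option", "<my_option").Replace("</option>", "</my_option>");

var doc = new HtmlDocument();
doc.LoadHtml(html);

var options = doc.DocumentNode.SelectNodes("//my_option");
if (options != null)
    foreach (var opt in options)
        Console.WriteLine(opt.GetAttributeValue("value", "") + ": " + opt.InnerText);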
The original post on HtmlAgilityPack forum can be found at:
http://htmlagilitypack.codeplex.com/Thread/View.aspx?ThreadId=14982
I'm trying to scrape a webpage (PubMed) to see how many references appear in specific articles (some articles have references, some don't). However, the problem I'm having right now is that the divs are all nested and named the same thing, so I haven't been able to figure out what code is required to get the elements.
So far I've tried using contains() to see if I could just grab a catch-all and dig my way into the node from there, but that hasn't worked.
.SelectNodes("//div[contains(#class,'portlet_title')]");
I also tried copying the XPath, but all I got was null as a result:
.SelectNodes("//*[@id='disc_col']/div[3]/div[1]/div/h3/span");
Any help would be appreciated, as I am no master at XPath.
And for reference, a page that would fit my criteria is:
http://www.ncbi.nlm.nih.gov/pubmed/?term=23489346 (the right-hand side says "Cited by * articles").
I've also browsed some other responses, however they all seemed to be for results with differently named divs (i.e. get all the div ids on an HTML page using Html Agility Pack). Either I don't understand how to use this correctly, or my problem is different.
Thanks again.
Mike! Try using:
var titles = website.DocumentNode.SelectNodes("//div[#class='portlet_title']");
The errors in your XPaths are: 1. attributes are written inside the "[]" with the "@" symbol, as above; 2. when you rely on element positions, give every XPath step an explicit index, e.g. "//div[3]/div[1]/div[1]/h3[1]/span[1]".
Good luck!
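Putting that together, a sketch (the URL is the PubMed page from the question; the class name may of course have changed since):

using System;
using HtmlAgilityPack;

var web = new HtmlWeb();
var doc = web.Load("http://www.ncbi.nlm.nih.gov/pubmed/?term=23489346");

// SelectNodes returns null when nothing matches, so guard before iterating.
var titles = doc.DocumentNode.SelectNodes("//div[@class='portlet_title']");
if (titles != null)
    foreach (var title in titles)
        Console.WriteLine(title.InnerText.Trim());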
I'm having what seems like a really simple problem. I'm trying to navigate to an element in HTML by Xpath, and can't seem to get it to function properly.
I want to grab a span from the HTML contents of a page. The page is fairly complex, so I've been using Firebug's "get element by XPath" and pasting the result into my code. I've noticed it's slightly different from the XPath you get from doing the same thing in Chrome, but they both seem to point to the same place.
The HTML I'm trying to navigate is found here. The field I'm trying to access via XPath is the first "Results 1 - 10 of n".
Based on FireBug's 'inspect element' the xpath should be: /html/body/div/center/table/tbody/tr[6]/td/table/tbody/tr/td[2]/table/tbody/tr/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/tr/td/span
However, when I try to use this XPath to identify the element in a C# code-behind, it gives me a number of errors saying that the path cannot be found.
Am I doing something wrong here? I've tried a number of permutations of the XPath and I don't understand why this wouldn't cooperate within code.
Edit: I'm having this problem both in HTMLAgilityPack (though I managed to hack out a bad solution using regexes instead) and with a SELECT statement modeled after the answer found here
Edit 2: I'm trying to figure out this issue using Yahoo's free proxy as shown in the example here:
var query = 'SELECT * FROM html WHERE url="http://mattgemmell.com/2008/12/08/what-have-you-tried/" and xpath="//h1" and class="entry-title"';
var url = "http://query.yahooapis.com/v1/public/yql?q=" + query + "&format=json&callback=??";
$.getJSON(url, function (data) {
    alert(data.query.results.h1.content);
});
I'm having the same problems with HTML Agility Pack, but I'm more interested in getting this part to work. It works for the URL the answerer gave me (seen above). However, when I try to use even simple XPath expressions on the http://nl.newsbank.com URL, I get errors that no object has been retrieved, every time, no matter how basic the XPath.
Edit 3: I thought I'd elaborate a little more on the big picture of the larger problem I'm trying to solve, of which this is a critical component, in the hope that it provides a little more insight.
To learn basic ASP.NET development skills from scratch, I decided to make a simple web application based around the news archive search at http://nl.newsbank.com/. In its current iteration, it sends a POST request (although I've now learned you can use a GET request and just dump the body at the end of the URL) to send search criteria, as if the user had entered criteria in the search bar. It then searches the response (using regexes, not XPath, because I couldn't get that working) for the "Results 1-n of n" span, extracts n, and dumps it in a table. It's a cool little tool for looking up news occurrence rates over time.
I wanted to add functionality such that you could enter a date range (say May 2002 - June 2010) and run a frequency search for every month / week in that range. This is very easy to implement conceptually. HOWEVER the problem is, right now all this happens server side, and since there's no API, the HTTP response contains the entire page, and is therefore very bandwidth intensive. Sending dozens of queries at once would swallow absolutely unspeakable amounts of bandwidth and wouldn't be even a little scalable.
As a result I tried rewriting the application to work client-side. However, because of the same-origin policy I'm not able to send a request to an external host from the client side. HOWEVER, there is a loophole: I can use a free Yahoo proxy that makes the request and converts the result to JSON, and then use the JSONP exception to the same-origin policy to retrieve that data from the proxy.
Here's where I'm running into these xpath problems specific to http://nl.newsbank.com. I'm not able to retrieve html with any xpath, and I'm not sure why or how I can fix it. When debugging in VS2010, I'll receive the error Microsoft JScript runtime error: Unable to get value of the property 'content': object is null or undefined
As paul t. already mentioned in a comment, the TBODY elements are generated by the WebKit engine. The next problem is that the DIV between the BODY and CENTER does not exist on the page by default; it is added by a JS statement on line 119.
After stripping out the DIV and TBODY elements like
/html/body/center/table/tr[6]/td/table/tr/td[2]/table/tr/td/table/tr/td/table/tr/td/table/tr/td/span
I can successfully select a node with the HtmlAgilityPack.
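For example, a sketch (the URL is a placeholder for the actual newsbank results page from the question):

using System;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://nl.newsbank.com/"); // placeholder; use the real results page
var span = doc.DocumentNode.SelectSingleNode(
    "/html/body/center/table/tr[6]/td/table/tr/td[2]/table/tr/td/table" +
    "/tr/td/table/tr/td/table/tr/td/span");
if (span != null)
    Console.WriteLine(span.InnerText); // e.g. "Results 1 - 10 of n"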
Edit: don't use tools like Firebug for getting an XPath value on a website. Don't even use them if you just want to look at the source of the page. The problem with Firebug is that it shows you the current DOM document tree, which on almost every page has already been (heavily) modified by JS.
Your sample HTML page's elements haven't got many classes to select on, but if you're interested in the first <span> element that contains "Results: 1 - 10 of n", you can use an XPath expression that explicitly targets this textual content.
For example:
//table//span[starts-with(., "Results:")]
will select all <span> elements contained in a <table> whose text content begins with "Results:" (the //table is not strictly necessary in your case, I think, but it might as well restrict things a little)
You want the first one of these <span>, so you can use this expression:
(//table//span[starts-with(., "Results:")])[1]
Note the brackets around the whole previous expression, and then the [1] to select the first of all the <span> elements matching the text.
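In HtmlAgilityPack that could look like this (a sketch; the page URL is a placeholder):

using System;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://nl.newsbank.com/"); // placeholder URL
var result = doc.DocumentNode.SelectSingleNode(
    "(//table//span[starts-with(., \"Results:\")])[1]");
if (result != null)
    Console.WriteLine(result.InnerText);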
It may sound kind of simplistic, but the element you are looking for is the only element in the document using the CSS class "basic-text-white". I would think this would be a lot easier to find and extract than a long XPath. Web scraping is never a stable thing, but I would think this is probably at least as stable as the XPath. Trying to debug that XPath just about makes my eyes bleed.
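For instance, a sketch (this assumes the target is the <span> from the question and that the class is unique on the page):

using System;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://nl.newsbank.com/"); // placeholder URL
var span = doc.DocumentNode.SelectSingleNode("//span[@class='basic-text-white']");
if (span != null)
    Console.WriteLine(span.InnerText);

Class-based selection like this also tends to survive layout changes better than a long positional path.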
I've tried this and searched for help, but I cannot figure it out. I can get the source for a page, but I don't need the whole thing, just one string that is repeated. Think of it like trying to grab only the titles of articles on a page and adding them, in order, to an array without losing any special characters. Can someone shed some light?
You can use a Regular Expression to extract the content you want from a string, such as your HTML string.
Or you can use a DOM parser such as Html Agility Pack.
Hope this helps!
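For the "grab repeated titles into an array" case from the question, an Html Agility Pack sketch might look like this (the URL and the //h2[@class='title'] XPath are placeholders for the page's real structure):

using System;
using System.Linq;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://example.com/articles"); // placeholder URL
var nodes = doc.DocumentNode.SelectNodes("//h2[@class='title']"); // placeholder XPath

// DeEntitize decodes entities like &amp; so special characters survive intact.
string[] titles = nodes == null
    ? new string[0]
    : nodes.Select(n => HtmlEntity.DeEntitize(n.InnerText).Trim()).ToArray();

foreach (var t in titles)
    Console.WriteLine(t);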
You could use something like this -
// Requires: using System.Linq; and using System.Text.RegularExpressions;
var text = "12 hello 45 yes 890 bye 999";
var matches = Regex.Matches(text, @"\d+").Cast<Match>().Select(m => m.Value).ToList();
The example pulls all numbers in the text variable into a list of strings. But you could change the Regular Expression to do something more suited to your needs.
If the page is well-formed XML, you could use LINQ to XML by loading the page into an XDocument and using XPath (or another way of traversing) to reach the element(s) you desire, loading what you need into the array you are looking for (or just use the enumerable if all you want to do is enumerate). If the page is not under your control, though, this is a brittle solution that could break at any time when subtle changes break the well-formedness of the XML. If that's the case, you're probably better off using regular expressions. Either way, though, the page could change under you and your code would suddenly stop working.
The best thing you could do would be to get the provider of the page to expose what you need as a web service rather than trying to scrape their page.
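If the page really is well-formed, the LINQ to XML route might look like this (a sketch; the URL and element names are illustrative):

using System;
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath; // for the XPathSelectElements extension

var doc = XDocument.Load("http://example.com/page.xhtml"); // placeholder well-formed page
// Note: XHTML served with a default namespace needs namespace-aware queries instead.
string[] titles = doc.XPathSelectElements("//h2[@class='title']") // placeholder XPath
                     .Select(e => e.Value)
                     .ToArray();

foreach (var t in titles)
    Console.WriteLine(t);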
I have something of a hairy problem: I'd like to generate a couple of paragraphs of "description" for a given URL, normally the start of an article. The meta description field is one way to go, but it isn't always good or set properly.
It's fair to say it's a bit problematic to accomplish this from the screenscraped HTML. I had a general idea that perhaps one could scan the HTML for the first "appropriate" segment but it's hard to say what that is, perhaps something like the first paragraph containing a certain amount of text...
Anyone have any good ideas? :) It doesn't have to be foolproof
So, you wanna become a new Google, heh? :-)
Many sites are "SEO friendly" these days. This enables you to go for the headings and then look for paragraphs below them.
Also, look for lists. There is a lot of content in some sort of tab-like (tabs, accordions...) interfaces that is done using ordered or unordered lists.
If that fails, maybe look for a div with class "content" or "main" or a combination and start from there.
If you use different approaches, make sure you keep statistics of what worked and what didn't (maybe even save a full page), so you can review and tweak your parsing and searching methods.
As a side note, I've used HtmlAgilityPack to parse and search through HTML with success. Well, at least it beats parsing with regex :-)
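A rough illustration of that fallback chain (the class names are just common conventions, not guaranteed to exist on any given page):

using System;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://example.com/article"); // placeholder URL

// Try conventional container classes first, then fall back to the whole document.
var container =
    doc.DocumentNode.SelectSingleNode("//div[contains(@class,'content')]") ??
    doc.DocumentNode.SelectSingleNode("//div[contains(@class,'main')]") ??
    doc.DocumentNode;

// First paragraph with a reasonable amount of text (the threshold is arbitrary).
var para = container.SelectSingleNode(".//p[string-length(normalize-space(.)) > 100]");
if (para != null)
    Console.WriteLine(para.InnerText);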
Perhaps look for the div element that contains the most p elements, and then grab the first p child. If no div, get the first p from the body element.
This will always have its problems.
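A minimal sketch of that heuristic (the URL is a placeholder):

using System;
using System.Linq;
using HtmlAgilityPack;

var doc = new HtmlWeb().Load("http://example.com/article"); // placeholder URL

// Pick the <div> with the most <p> children; fall back to the first <p> under <body>.
var divs = doc.DocumentNode.SelectNodes("//div[p]");
var firstPara = divs != null
    ? divs.OrderByDescending(d => d.SelectNodes("./p").Count).First().SelectSingleNode("./p")
    : doc.DocumentNode.SelectSingleNode("//body//p");

if (firstPara != null)
    Console.WriteLine(firstPara.InnerText);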
You can strip the HTML tags using this regular expression:
string stripped = Regex.Replace(textBox1.Text, @"<(.|\n)*?>", string.Empty);
You will then get the content text you can use to generate your paragraphs.