How do I validate an HTML file with C#?

I have a C# application that receives an HTML file. I want to parse and validate it; the output should be either a list of errors or confirmation that the HTML is valid.
Does anyone have an idea how I can do this?

I'd run a local instance of the W3C Markup Validation Service and communicate with it via its API.
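For example, here is a minimal sketch that posts a file to a Nu HTML Checker instance and reads back its JSON report. The local URL (http://localhost:8888) is an assumption; point it at wherever your validator instance is actually running.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ValidatorClient
{
    static async Task Main()
    {
        string html = File.ReadAllText("page.html");

        using var client = new HttpClient();
        // Send the raw markup; ?out=json asks the checker to reply with JSON.
        var content = new StringContent(html, Encoding.UTF8, "text/html");
        HttpResponseMessage response =
            await client.PostAsync("http://localhost:8888/?out=json", content);

        // The reply contains a "messages" array describing each error/warning,
        // or an empty array if the document is valid.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```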

You can use HTML Tidy. There is a wrapper for .NET called TidyManaged.
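A rough sketch with TidyManaged (the property names follow the wrapper's own examples; check its README for the exact API of the version you install):

```csharp
using System;
using TidyManaged;

class TidyExample
{
    static void Main()
    {
        // Load the received HTML and let Tidy diagnose and repair it.
        using (Document doc = Document.FromFile("page.html"))
        {
            doc.ShowWarnings = true;
            doc.OutputXhtml = true;
            doc.CleanAndRepair();          // runs the cleanup pass
            Console.WriteLine(doc.Save()); // the repaired markup
        }
    }
}
```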

There is an obscure DLL that has been in the framework since version 1.0 (!), Microsoft.mshtml.dll, and it is the only way within the framework itself to work with an HTML DOM. If the HTML is XHTML and therefore valid XML, you can use the XML APIs instead (as sketched below); otherwise this is your only option.
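If the input really is XHTML, the XML route is straightforward; a small sketch (the file name is a placeholder):

```csharp
using System;
using System.Xml;
using System.Xml.Linq;

class XhtmlCheck
{
    static void Main()
    {
        try
        {
            // XDocument.Load throws on the first well-formedness error it hits.
            XDocument doc = XDocument.Load("page.xhtml");
            Console.WriteLine("Well-formed, root element: " + doc.Root.Name);
        }
        catch (XmlException ex)
        {
            Console.WriteLine($"Not well-formed: {ex.Message} (line {ex.LineNumber})");
        }
    }
}
```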

Related

Is there a way to extract specific HTML tag info with NCrawler?

Specs: Win7 64, VS 2010, .NET 4.0, NCrawler library
I'm writing a crawler that will extract some data from an online shop. The application extracts the URLs fine, and I can navigate to every item in the shop properly. The problem is that every "PropertyBag" object that holds the page data for a product is in plain text form. I was wondering if there is a way to read the contents of a specific tag, like <description>Text</description>, from this "PropertyBag", or whether there is another way to do it.
Thanks
You need an HTML parser like HtmlAgilityPack (http://htmlagilitypack.codeplex.com/) to extract the required information.
But I would recommend using Abot (https://code.google.com/p/abot/) as the web crawler. It is an actively developed, free, open-source web crawler written in C#.
Abot has built-in HTML parsers like HtmlAgilityPack (extract elements via XPath) and CsQuery (extract elements via CSS selectors).
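For example, a sketch with HtmlAgilityPack that pulls the text of a <description> tag out of a raw HTML string (the pageHtml variable stands in for whatever markup your crawler's property bag holds):

```csharp
using System;
using HtmlAgilityPack;

class DescriptionExtractor
{
    static void Main()
    {
        // Placeholder for the raw markup taken from the crawler's property bag.
        string pageHtml = "<html><body><description>Product text</description></body></html>";

        var doc = new HtmlDocument();
        doc.LoadHtml(pageHtml);

        // XPath query for the first <description> element anywhere in the document.
        HtmlNode node = doc.DocumentNode.SelectSingleNode("//description");
        Console.WriteLine(node != null ? node.InnerText : "tag not found");
    }
}
```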

Can the WebBrowser control handle "bad" HTML?

I'm working with the WebBrowser control in C# and trying to access some HtmlElements in the document.
The problem is that the document Body only contains two out of five children (http://www.target.com/cart/ref=nav_sc_rev_checkout), so I can't access specific elements in the body, although it renders fine in the UI.
I suspect there is bad HTML in the body, so the document tree is corrupt?
Is there a way to handle this, since it still renders nicely?
Thanks.
Update:
The problem was that the DocumentCompleted event was raised before the document was fully parsed, which is why I only got two of the five elements.
Yes, the WebBrowser control is a wrapper around IE and it will handle bad HTML as well as it can.
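Regarding the update above: DocumentCompleted is raised once per frame and can fire before the control is fully ready, so it helps to guard the handler. A hedged sketch (the URL is the one from the question):

```csharp
using System;
using System.Windows.Forms;

class BrowserForm : Form
{
    private readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };

    public BrowserForm()
    {
        Controls.Add(browser);
        browser.DocumentCompleted += OnDocumentCompleted;
        browser.Navigate("http://www.target.com/cart/ref=nav_sc_rev_checkout");
    }

    private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        // Ignore frame completions; only act when the top-level document is done
        // and the control reports it is fully ready.
        if (e.Url != browser.Url || browser.ReadyState != WebBrowserReadyState.Complete)
            return;

        Console.WriteLine("Body children: " + browser.Document.Body.Children.Count);
    }

    [STAThread]
    static void Main() => Application.Run(new BrowserForm());
}
```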
Can't you simply write the contents to a text file instead of a WebBrowser control and save it as an HTML file? Then load it in your browser and inspect it with the dev tools of your choice.
Aside from the fact that the HTML of this site has more than 200 errors (mostly missing entities), you can try to load the code into an XmlDocument or XDocument in your program and access the nodes you want via XPath.
If you need to programmatically interact with HTML, and more specifically bad HTML, I would suggest you take a look at HtmlAgilityPack.
This is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you actually don't HAVE to understand XPATH nor XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams).
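A small sketch of using it against the page from the question (the XPath expressions are just illustrative):

```csharp
using System;
using HtmlAgilityPack;

class BodyInspector
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page, tolerating the bad markup.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://www.target.com/cart/ref=nav_sc_rev_checkout");

        HtmlNode body = doc.DocumentNode.SelectSingleNode("//body");
        foreach (HtmlNode child in body.ChildNodes)
            Console.WriteLine(child.Name);
    }
}
```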

Manipulating HTML files

I'm working on a browser-like application which gets HTML from a site (any website) and then applies a style script over it to change certain elements (just like Greasemonkey).
My initial plan is to parse the HTML using XPath and XmlDocument, but is there a better way?
Thanks in advance!
P.S. Handy tips, tricks & links on HTML + C# would be great! ^^
Use the HTML Agility Pack. You can find it here: http://www.codeplex.com/htmlagilitypack
HTML does not always follow XML rules; for example, there are tags in HTML that may not have a closing tag, so XPath and XDocument will sometimes throw errors. The IE API gives you the ability to do this (see here), and you can also find third-party parsers for it (see this or this).
I would highly recommend using XSLT. This allows you to keep all your transformational data OUTSIDE your code, making it really easy to change if the HTML to be transformed is modified or you want to change your layout.
Nonetheless, if you're using HTML and not XHTML, beware of possible errors. Using a Tidy library can help you overcome this.
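A minimal sketch with XslCompiledTransform (the file names are placeholders; as noted above, you would typically run the HTML through Tidy first so it is valid XML):

```csharp
using System.Xml.Xsl;

class TransformDemo
{
    static void Main()
    {
        // transform.xslt holds the restructuring rules, kept outside the code.
        var xslt = new XslCompiledTransform();
        xslt.Load("transform.xslt");

        // input.xhtml is the tidied page; output.html is the rewritten result.
        xslt.Transform("input.xhtml", "output.html");
    }
}
```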
I would really recommend using a package for your programming language of choice that handles all the oddities of HTML parsing. I've used Hpricot in Ruby before and it's made things a breeze.
If you want to be able to browse the HTML based on its content, XPath is a good choice. But you'll have to clean up the HTML first. You can use HTML tidy to convert the HTML to XHTML. In the process you might modify how the page renders. But it seems to be the purpose of your project so that's not a big deal.

How to parse an XHTML file that is not 100% valid?

I have XHTML files whose source is not completely valid; it does not follow the DTD of an XML document.
For example, in places it uses &ldquo; for quotation marks or &rsquo; for apostrophes. This causes exceptions in my C# code.
So is there any method, or any web link, that I can use to get rid of this?
If the file is otherwise well-formed you can define the character entities in your own DTD.
If the file is ill-formed the HTML Agility Pack from CodePlex will parse it.
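For the first approach (defining the entities yourself), here is a sketch that prepends an internal DTD subset and parses with DTD processing enabled. It assumes the file doesn't already carry its own DOCTYPE, and the entity list would need to cover whatever your files actually use:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Linq;

class EntityFix
{
    static void Main()
    {
        // Declare the HTML entities that a plain XML parser doesn't know about.
        const string doctype =
            "<!DOCTYPE html [ <!ENTITY ldquo \"&#8220;\"> <!ENTITY rdquo \"&#8221;\"> " +
            "<!ENTITY rsquo \"&#8217;\"> ]>\n";

        string xhtml = File.ReadAllText("page.xhtml");

        var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Parse };
        using (XmlReader reader = XmlReader.Create(new StringReader(doctype + xhtml), settings))
        {
            XDocument doc = XDocument.Load(reader);
            Console.WriteLine(doc.Root.Name);
        }
    }
}
```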
You could parse the document as HTML instead, since both end up in a DOM and HTML parsers scoff at these pansy quotation-mark problems. Going along with unknown's HTML Tidy idea, you could then serialize the DOM back into a valid XHTML file. (This is identical to using HTML Tidy, which presumably uses an HTML parser anyway, except you'd do it from C# programmatically.)
Well, by the nature of XML it needs to be valid, otherwise it won't render at all. I'd first see what type of errors it generates with the W3C validator: http://validator.w3.org/
Also consider using HTML tidy, which can be configured to fix XML as well.
We use Hpricot to fix our XML, but then again we are building Rails apps. Not sure about C#.

Parsing HTML Fragments

What's the best way to parse fragments of HTML in C#?
For context, I've inherited an application that uses a great deal of composite controls, which is fine, but a good deal of the controls are rendered using a long sequence of literal controls, which is fairly terrifying. I'm trying to get the application under unit tests, and I want to get these controls under tests that will find out whether they're generating well-formed HTML and, in a dream solution, validate that HTML.
Have a look at the HTML Agility Pack. It's very compatible with the .NET XmlDocument class, but it is much more forgiving about HTML that's not clean/valid XHTML.
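For the well-formedness part of the unit tests, HtmlAgilityPack records the problems it hit while parsing, so you can assert against them. A sketch (NUnit here is just an example framework, and renderedHtml stands in for whatever your control emits):

```csharp
using System.Linq;
using HtmlAgilityPack;
using NUnit.Framework;

[TestFixture]
public class ControlMarkupTests
{
    [Test]
    public void RenderedFragmentIsWellFormed()
    {
        // Placeholder for the markup produced by the composite control under test.
        string renderedHtml = "<div><p>hello</p></div>";

        var doc = new HtmlDocument { OptionCheckSyntax = true };
        doc.LoadHtml(renderedHtml);

        // ParseErrors lists every syntax problem the parser encountered.
        Assert.IsFalse(doc.ParseErrors.Any(),
            string.Join("; ", doc.ParseErrors.Select(e => e.Reason)));
    }
}
```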
If the HTML is XHTML compliant, you can use the built in System.Xml namespace.
I've used SgmlReader to produce a valid XML document from HTML and then parsed what is required using XPath, or transformed it to another format using XSLT.
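Roughly like this (the property names follow the usual SgmlReader samples; double-check them against the version you're using):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;
using Sgml;

class SgmlDemo
{
    static void Main()
    {
        string html = File.ReadAllText("page.html");

        var sgmlReader = new SgmlReader
        {
            DocType = "HTML",                  // treat the input as HTML
            CaseFolding = CaseFolding.ToLower, // normalize element names
            InputStream = new StringReader(html)
        };

        // SgmlReader behaves like an XmlReader, so the XML APIs can consume it directly.
        XDocument doc = XDocument.Load(sgmlReader);
        Console.WriteLine(doc.Descendants("title").FirstOrDefault()?.Value);
    }
}
```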
You can also look into HTML Tidy for HTML parsing/cleanup. I don't think they have specific .NET libraries, but you might be able to run the binary via the command line, or use IKVM on the Java libraries.
