Post data to a server and then parse the HTML code in C#

I'm trying to parse a website. The only problem is that the site doesn't use a specific URL for the page I want to parse. The content is displayed on the same page using JavaScript, so it differs depending on the search query.
Is it possible to choose a value from a drop-down menu, post that to the server, and then parse the resulting HTML in C#?
Clarification: the response is returned as HTML.
I know the name of the drop-down option I want to post, but how do I do that from code-behind?

Most sites do not really generate HTML in JavaScript. Much more often you see ASP.NET sites where JavaScript is used for a postback (and the name of the drop-down is posted back in the __EVENTTARGET field).
You can then do the same in your application: imitate filling out the form by passing all the fields to the server, including __VIEWSTATE and __EVENTTARGET.
Having said that, it might be against the site's terms of use.
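
A minimal sketch of that approach, using HttpClient (async Main requires C# 7.1+). The control name "ddlSearch" and the hidden-field values are hypothetical placeholders; in practice you would first GET the page and scrape the real __VIEWSTATE and __EVENTVALIDATION values out of its HTML:

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PostbackClient
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // In a real run these come from hidden <input> fields on the page.
                string viewState = "...extracted __VIEWSTATE value...";
                string eventValidation = "...extracted __EVENTVALIDATION value...";

                // Post the form back, naming the drop-down as the event target.
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["__EVENTTARGET"] = "ddlSearch",   // hypothetical control name
                    ["__EVENTARGUMENT"] = string.Empty,
                    ["__VIEWSTATE"] = viewState,
                    ["__EVENTVALIDATION"] = eventValidation,
                    ["ddlSearch"] = "OptionIWant"      // the drop-down value to select
                });

                var response = await client.PostAsync("http://example.com/page.aspx", form);
                string html = await response.Content.ReadAsStringAsync();
                Console.WriteLine(html); // feed this to your HTML parser
            }
        }
    }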

You definitely need to check out Selenium; it does exactly what you need. It is commonly used as a testing framework, but you can also use it to manipulate HTML elements even when the website uses JavaScript.
Note: Selenium allows you to open and manipulate a website using a browser such as Firefox, Chrome, IE, etc. through its WebDriver API; if you don't want a visible browser window, a headless driver can exercise the page without opening one. Most of my experience with Selenium is in Java, but I found multiple tutorials online for .NET too.
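
A minimal sketch of the Selenium route in C#, assuming the Selenium.WebDriver and Selenium.Support NuGet packages; the URL, the element ids ("ddlSearch", "results") and the option text are hypothetical placeholders:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Support.UI;

    class DropDownScraper
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("http://example.com/search"); // placeholder URL

                // Pick an option from the drop-down by its visible text.
                var dropDown = new SelectElement(driver.FindElement(By.Id("ddlSearch")));
                dropDown.SelectByText("OptionIWant");

                // Wait until the JavaScript-driven results element appears.
                var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
                wait.Until(d => d.FindElement(By.Id("results")));

                // PageSource now reflects the DOM after the scripts have run.
                Console.WriteLine(driver.PageSource);
            }
        }
    }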

Related

Execute script with HtmlAgilityPack [duplicate]

I'm trying to scrape a particular webpage which works as follows.
First the page loads, then it runs some JavaScript to fetch the data it needs to populate the page. I'm interested in that data.
If I GET the page with HtmlAgilityPack, the script doesn't run, so I get what is essentially a mostly blank page.
Is there a way to force the script to run, so I can get the data?
You are getting what the server returns - the same as a web browser does. A web browser, of course, then runs the scripts. Html Agility Pack is an HTML parser only - it has no way to interpret the JavaScript or bind it to its internal representation of the document. If you wanted to run the script you would need a web browser.

The perfect answer to your problem would be a complete "headless" web browser: something that incorporates an HTML parser, a JavaScript interpreter, and a model that simulates the browser DOM, all working together. Basically, that's a web browser, except without the rendering part. At this time there isn't such a thing that works entirely within the .NET environment.
Your best bet is to use a WebBrowser control and actually load and run the page in Internet Explorer under programmatic control. This won't be fast or pretty, but it will do what you need to do.
Also see my answer to a similar question, Load a DOM and Execute javascript, server side, with .Net, which discusses the technology available in .NET to do this. Most of the pieces exist right now, but unfortunately they aren't quite there yet or haven't been integrated in the right way.
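
A minimal sketch of the WebBrowser approach described above. The WebBrowser control requires an STA thread with a message pump, which is why the console program below spins one up; the URL is a placeholder:

    using System;
    using System.Threading;
    using System.Windows.Forms;

    class BrowserScraper
    {
        static void Main()
        {
            var thread = new Thread(() =>
            {
                var browser = new WebBrowser { ScriptErrorsSuppressed = true };
                browser.DocumentCompleted += (s, e) =>
                {
                    // The initial scripts have run by now; pages that keep
                    // fetching data afterwards may need an extra delay.
                    Console.WriteLine(browser.Document.Body.OuterHtml);
                    Application.ExitThread();
                };
                browser.Navigate("http://example.com"); // placeholder URL
                Application.Run(); // pump messages so the control can work
            });
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
            thread.Join();
        }
    }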
You can use Awesomium for this: http://www.awesomium.com/. It works fairly well, but it has no x64 support and is not thread safe. I'm using it to scan some web sites 24x7; it runs fine for at least a couple of days in a row, but then it usually crashes.

How to update the view source of a webpage dynamically in .NET

There is a website named www.localbanya.com from which I want to grab HTML information. They list products, and the structure of their display is:
First they display around 8-10 products on page load, and
later, when the user scrolls down, more products are generated.
Since this happens via JavaScript, I am not able to get the whole page source using WebClient.
I wanted to know whether there is any way to update the page source while using the WebClient class in .NET, or any other alternative I can use, to retrieve the whole page's HTML information at once.
You can refer to the localbanya product page for reference.
Any help will be appreciated.
WebClient obviously doesn't run the JavaScript,
so you are going to need some sort of headless browser to do it.
There are many options for this, though I don't know of any C# or .NET implementation.
You may look into PhantomJS and other headless browsers, which replicate what a normal browser does and can be driven by scripts.
Also refer to this question:
Headless browser for C# (.NET)?
You can also run something like Fiddler to see what requests are made from the page when scrolling down, reverse engineer how the data is retrieved, and replicate that with a WebClient if possible.
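
For example, here is a minimal sketch of replaying such a request once you have found it; the endpoint URL, query parameters and headers are hypothetical stand-ins for whatever Fiddler actually shows you:

    using System;
    using System.Net;

    class ProductFetcher
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Many AJAX endpoints expect the same headers the browser sent.
                client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
                client.Headers["X-Requested-With"] = "XMLHttpRequest";

                // Hypothetical paging parameters discovered via Fiddler.
                string json = client.DownloadString(
                    "http://www.localbanya.com/products/load?page=2&pageSize=10");
                Console.WriteLine(json);
            }
        }
    }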
Hope this helps.

C#: programmatically communicating with a website

I have the following problem.
Example site: http://eisk.apphb.com/web-form-samples/listing-page.aspx
My C# application has to read data from the GridView, but only for a specific supervisor, so I need to programmatically change the value in the drop-down list.
I am having trouble changing this value and then getting the page with the updated data in the GridView.
Please help me solve this case.
I did something related in the past using Selenium, but they have since changed the API and it's not as easy as it was before (they merged Selenium with WebDriver).
You can read more about that here:
http://www.seleniumhq.org/docs/05_selenium_rc.jsp
You can also use WatiN to do the same:
http://www.codeproject.com/Articles/17064/WatiN-Web-Application-Testing-In-NET
You could use an HttpWebRequest to download the site content, then use HtmlAgilityPack to parse the HTML and extract the data.
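
A minimal sketch of that combination, assuming the HtmlAgilityPack NuGet package; the table id "GridView1" is a guess at a typical ASP.NET default, so inspect the real page to confirm it:

    using System;
    using System.IO;
    using System.Net;
    using HtmlAgilityPack;

    class GridScraper
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "http://eisk.apphb.com/web-form-samples/listing-page.aspx");

            string html;
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                html = reader.ReadToEnd();
            }

            var doc = new HtmlDocument();
            doc.LoadHtml(html);

            // Select every row of the (assumed) GridView table.
            var rows = doc.DocumentNode.SelectNodes("//table[@id='GridView1']//tr");
            if (rows != null)
            {
                foreach (var row in rows)
                    Console.WriteLine(row.InnerText.Trim());
            }
        }
    }

Note that this retrieves only the initially rendered page; to get the grid for a different supervisor you would still have to imitate the drop-down postback, as discussed in the first answer above.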

How to read the HTML content of an AJAX-based site

I would like to know how the HTML source of AJAX-based sites can be read using HttpWebRequest / HttpWebResponse (that is, reading the contents of a website server-side). The problem I'm facing is that I'm unable to read the parts of the webpage that are produced by AJAX or by features like UpdatePanel.
My application is in ASP.NET / C#, so I can't consider things like the Browser control or mshtml.dll, since I would not be able to serve multiple requests.
Thanks in advance.
This is going to be difficult.
I know you said you don't want to use a Browser control, but I'm going to say it anyway: you will most probably be better off using one. The reasons are as follows:
AJAX sites make multiple calls from the browser to the server to obtain the required view.
The multiple calls are performed via JavaScript.
The data returned from the server may be reformatted by JavaScript before being inserted into the view.
If you are going to do this using the HttpWebXyz classes, you will have to do the following (a sketch of the first two steps appears after this list):
Make the relevant calls to get the initial page source.
Parse the page for JavaScript.
Evaluate/execute the JavaScript. This may include providing the relevant implementation for functions such as alert and making subsequent calls to the server.
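
A minimal sketch of the first two steps; the URL is a placeholder, and the regular expression is a deliberately crude way to pull out inline scripts:

    using System;
    using System.IO;
    using System.Net;
    using System.Text.RegularExpressions;

    class ScriptExtractor
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com"); // placeholder
            string html;
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                html = reader.ReadToEnd();
            }

            // Crude extraction of inline <script> blocks; evaluating them
            // (step 3) is the genuinely hard part and is not shown here.
            foreach (Match m in Regex.Matches(html,
                @"<script[^>]*>(.*?)</script>",
                RegexOptions.Singleline | RegexOptions.IgnoreCase))
            {
                Console.WriteLine(m.Groups[1].Value.Trim());
                Console.WriteLine(new string('-', 40));
            }
        }
    }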
Depending on the complexity of the AJAX site, you may want to reconsider using the Browser control. Complex sites are easier to process with the control; if the site is simple enough, you may get by with parsing and executing the required JavaScript yourself.
This example uses a deprecated class to parse JavaScript.
You may want to explore ICodeCompiler and its relevant classes for the new approach.
Good luck.

Grab details from web page

I need to write C# code for grabbing the contents of a web page. The steps look like the following:
Browse to the login page.
I have a user name and a password; provide them programmatically and log in.
Then you are on the detail page.
You have to get some information there, like product Id, description, etc.
Then you need to click (by code) on Detail View.
Then you can get the price for that product from there.
Once that is done, we can write a detail line into a text file like this:
ABC Printer::225519::285.00
Please help me with this. (Even VB.NET code is OK; I can convert it to C#.)
The WatiN library is probably what you want, then. Basically, it controls a web browser (native support for IE and Firefox, I believe, though they may have added more since I last used it) and provides an easy syntax for programmatically interacting with page elements within that browser. All you'll need are the names and/or IDs of those elements, or some unique way to identify them on the page.
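
A minimal sketch of what that looks like with WatiN; every element name, id and the URL below are hypothetical, and WatiN needs an STA entry point:

    using System;
    using WatiN.Core;

    class DetailScraper
    {
        [STAThread]
        static void Main()
        {
            using (var browser = new IE("http://example.com/login")) // placeholder URL
            {
                // Fill in and submit the login form (hypothetical field names).
                browser.TextField(Find.ByName("username")).TypeText("myUser");
                browser.TextField(Find.ByName("password")).TypeText("myPass");
                browser.Button(Find.ByValue("Login")).Click();

                // Click through to the detail view (hypothetical link text).
                browser.Link(Find.ByText("Detail View")).Click();

                // Read the price from a (hypothetical) element and emit the line.
                string price = browser.Span(Find.ById("lblPrice")).Text;
                Console.WriteLine("ABC Printer::225519::" + price);
            }
        }
    }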
You should be able to achieve this using the WebRequest class to retrieve pages, and the HTML Agility Pack to extract elements from HTML source.
Yes, I downloaded that library - nice one, thanks for sharing it.
But I have an issue with it: the site I want to get data from has a captcha on the login page.
I can enter that value myself if the code can show the image and wait for my input.
Can we achieve that with this library? If so, I would like to see a sample.
You should be able to achieve this by using two classes in C#: HttpWebRequest (to request the web pages) and perhaps XmlTextReader (to parse the HTML/XML response).
If you do not wish to use XmlTextReader, I'd advise looking into regular expressions, as they are fantastically useful for extracting information from large bodies of text in which patterns exist.
How to: Send Data Using the WebRequest Class
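
In the spirit of that how-to, here is a minimal sketch of posting login data with WebRequest; the URL and field names are placeholders:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class FormPoster
    {
        static void Main()
        {
            var request = WebRequest.Create("http://example.com/login"); // placeholder URL
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";

            byte[] body = Encoding.UTF8.GetBytes("username=myUser&password=myPass");
            request.ContentLength = body.Length;
            using (var stream = request.GetRequestStream())
            {
                stream.Write(body, 0, body.Length);
            }

            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd()); // the page returned after login
            }
        }
    }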
