There is a website named "www.localbanya.com", and I want to grab the HTML information from it. They list products, and the structure of their display is:
First they display around 8-10 products on page load, and
later, when the user scrolls down, more products are generated.
Since this happens via JavaScript, I am not able to get the whole page source using WebClient.
I want to know whether there is any way I can update the page source while using the WebClient class in .NET to retrieve the whole page, or any other alternative I can use to get the complete page HTML at once.
You can refer to the localbanya product page for reference.
Any help will be appreciated.
WebClient obviously doesn't run the JavaScript,
so you are going to need some sort of headless browser to do it.
There are many options for this, though I don't know of a C# or .NET implementation offhand.
You may look into PhantomJS and other headless browsers, which replicate what a normal browser does and can be driven by scripts.
Also refer to this question:
Headless browser for C# (.NET)?
You can also run something like Fiddler to see what requests are made by the page when scrolling down, reverse engineer how the data is retrieved, and replicate that with a WebClient if possible.
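For example, here is a rough sketch of that Fiddler approach, assuming (purely hypothetically) that scrolling fires an XHR to a paged products endpoint; the URL, headers and parameter names below are invented and would need to be replaced with whatever request you actually see captured:

using System;
using System.Net;

class ProductFetcher
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Mimic the headers the browser sent (copy them from the captured request).
            client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
            client.Headers["X-Requested-With"] = "XMLHttpRequest";

            // Hypothetical endpoint and paging parameter - substitute the real
            // request that fires when the product list is scrolled.
            for (int page = 1; page <= 5; page++)
            {
                string url = "http://www.localbanya.com/products/load?page=" + page;
                string chunk = client.DownloadString(url);
                Console.WriteLine(chunk);
            }
        }
    }
}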
Hope this helps.
I'm trying to parse a website. The only problem is that the site doesn't use a specific URL for the page I want to parse. The content is displayed on the same page using JavaScript, so it differs depending on the search query.
Is it possible to choose a value from a dropdown menu, post that to the server, and then parse the returned HTML in C#?
Clarification: the response is returned as HTML.
I know the name of the option from the dropdown I want to post, but how do I do that from code-behind?
Most sites do not really generate HTML in JavaScript. Much more often you see ASP.NET sites where JavaScript is used for a postback (and the name of the dropdown is posted back in the __EVENTTARGET field).
You can then do the same in your application - you have to imitate submitting the form, passing all the fields to the server, including __VIEWSTATE and __EVENTTARGET.
Having said that, it might be against the site's terms of use.
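With that caveat, here is a rough sketch of imitating such a postback with WebClient; the page URL, the dropdown name and the hidden-field scraping are all assumptions to be replaced with what the real page contains:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Text.RegularExpressions;

class PostbackSketch
{
    static void Main()
    {
        // Hypothetical page URL and control names - take the real ones from the page source.
        const string url = "http://example.com/Search.aspx";

        using (var client = new WebClient())
        {
            // 1. GET the page once to pick up the hidden ASP.NET fields.
            string html = client.DownloadString(url);
            string viewState = Regex.Match(html,
                "id=\"__VIEWSTATE\" value=\"([^\"]*)\"").Groups[1].Value;
            string eventValidation = Regex.Match(html,
                "id=\"__EVENTVALIDATION\" value=\"([^\"]*)\"").Groups[1].Value;

            // 2. POST the same fields back, pretending the dropdown triggered the postback.
            var form = new NameValueCollection
            {
                { "__EVENTTARGET", "ddlCategory" },      // name of the dropdown (assumed)
                { "__EVENTARGUMENT", "" },
                { "__VIEWSTATE", viewState },
                { "__EVENTVALIDATION", eventValidation },
                { "ddlCategory", "Books" }               // the option value you want (assumed)
            };

            byte[] response = client.UploadValues(url, "POST", form);
            string resultHtml = Encoding.UTF8.GetString(response);
            Console.WriteLine(resultHtml.Length);
        }
    }
}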
You should definitely check out Selenium; it does exactly what you need. It is commonly used as a testing framework, but you can also use it to read and manipulate HTML elements even when the website relies on JavaScript.
Note: Selenium lets you open and manipulate a website in a browser such as Firefox, Chrome, IE, etc. For your case you would drive it through the WebDriver API; if you don't want a visible browser window, you can pair WebDriver with a headless browser. Most of my experience with Selenium is in Java, but I found multiple tutorials online for .NET too.
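A minimal sketch with the Selenium .NET bindings (assuming the OpenQA.Selenium packages and ChromeDriver are installed; the URL and element IDs are placeholders):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class SeleniumSketch
{
    static void Main()
    {
        // Drives a real Chrome instance, so the page's JavaScript runs normally.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/search");

            // Placeholder element IDs - substitute the real ones from the page.
            driver.FindElement(By.Id("searchBox")).SendKeys("query");
            driver.FindElement(By.Id("searchButton")).Click();

            // Once the script has rendered the results, read the generated HTML.
            string renderedHtml = driver.PageSource;
            Console.WriteLine(renderedHtml.Length);
        }
    }
}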
I'd like to create a web page where you can enter a domain name and have the page fetch it and show you all the resources, their download times, etc. - similar to Firefox's Net tab.
Here's the page which I'd like emulate: http://tools.pingdom.com/
Now, I know this is a complex feature, but I'd like to hear general ideas. I know I could easily fetch the HTML via a WebClient, but that's the easy part. I need to fetch and time all the resources too, and not all at the same time. I want to mimic a browser. So, I thought about using something like System.Windows.Forms.WebBrowser, but that will only really give me the page load time.
Anyone have any thoughts / tips?
Using the Html Agility Pack you can easily find which external resources are referenced from an HTML page.
This won't tell you exactly when the browser would load them, and it won't help with dynamically loaded resources, but it is a good start.
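A rough sketch of that with the Html Agility Pack (assuming the HtmlAgilityPack NuGet package); it simply pulls out the src/href attributes of the usual resource tags:

using System;
using HtmlAgilityPack;

class ResourceLister
{
    static void Main()
    {
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://example.com/");

        // Script, stylesheet and image references present in the static HTML.
        var nodes = doc.DocumentNode.SelectNodes(
            "//script[@src] | //link[@href] | //img[@src]");

        if (nodes != null)
        {
            foreach (HtmlNode node in nodes)
            {
                string url = node.GetAttributeValue("src", null)
                             ?? node.GetAttributeValue("href", null);
                Console.WriteLine("{0}: {1}", node.Name, url);
            }
        }
    }
}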
I'm afraid the only way to be sure is to instantiate an entire browser. You could use a plug-in for the Fiddler HTTP debugging proxy to intercept the requests made by the WebBrowser control and determine which resources are actually loaded in each case.
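If rough numbers are enough, one option (not what a real browser does, since it ignores caching, parallel connections and dynamically added resources) is to download each extracted resource yourself and time it:

using System;
using System.Diagnostics;
using System.Net;

class ResourceTimer
{
    static void Main()
    {
        // These URLs would come from the Html Agility Pack step above; placeholders here.
        string[] resources =
        {
            "http://example.com/site.css",
            "http://example.com/app.js",
            "http://example.com/logo.png"
        };

        foreach (string url in resources)
        {
            var watch = Stopwatch.StartNew();
            using (var client = new WebClient())
            {
                byte[] data = client.DownloadData(url);
                watch.Stop();
                Console.WriteLine("{0} - {1} bytes in {2} ms",
                    url, data.Length, watch.ElapsedMilliseconds);
            }
        }
    }
}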
I'm trying to scrape a page. Everything is OK, but when values are updated, the source code of the page stays the same for about a minute. Even when I refresh the page over a slow internet connection, I first see the old data, and only after the page is fully loaded are the values current.
I guess JavaScript updates them, but it still has to download them somehow.
How can I get the current values?
I'm writing my program in C#, but if you have ideas/advice/examples, the language doesn't really matter.
Thank you.
You're right - JavaScript is probably updating the data after the page loads.
I can think of three ways to handle this:
1. Use a WebBrowser control - I guess you're using the HttpWebRequest object to retrieve values from the site. That won't work if you need to let the JavaScript run. You can use the WebBrowser control instead, let the JavaScript run, and retrieve values from the DOM. The only thing I don't like about this approach is that it feels like a hack and is probably too clunky for production applications. You also need to know when to read the contents of the DOM (an update might be in progress in the background). Google "C# WebBrowser Control Read DOM Programmatically" for examples.
2. Reverse engineer the background requests - I personally prefer this over the previous option, but it doesn't work all the time. First you need to inspect the website with Firebug or a similar tool and see which URLs are called in the background. Say, for example, the site is updating stock quotes using JavaScript. Most likely it is using an asynchronous request to retrieve the updated information from a web service. In Firebug, you can see these under Net > XHR. Now comes the hard part: take a look at the request and the values returned. The idea is that you can retrieve the values yourself and parse the contents (see the sketch after this list), which can be a lot easier than scraping a page. The problem is that you will need to do a bit of reverse engineering to get it right, and you might also run into problems with authentication and/or encryption.
3. Lastly, and my most preferred solution, is asking the owner of the site you are scraping directly.
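A minimal sketch of option 2, assuming (purely hypothetically) that Firebug showed the page polling a JSON endpoint for the updated values; the URL and headers are invented and stand in for whatever you actually see under Net > XHR:

using System;
using System.IO;
using System.Net;

class ValuePoller
{
    static void Main()
    {
        // Hypothetical endpoint discovered under Net > XHR in Firebug.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/api/quotes?symbol=ABC");
        request.Accept = "application/json";
        request.Headers["X-Requested-With"] = "XMLHttpRequest";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The raw JSON the page's JavaScript would normally consume.
            string json = reader.ReadToEnd();
            Console.WriteLine(json);
        }
    }
}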
I think the WebBrowser control approach is probably OK and doesn't depend on third-party libraries. Here is what I intend to use; it solves the problem of waiting for the page to finish loading:
private string ReadPage(string link)
{
    // Navigate and pump Windows messages until the control reports the page has loaded.
    this.wbrwPages.Navigate(link);
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }
    return this.wbrwPages.DocumentText;
}
I will get information out of the HTML through some form of DOM or XPath treatment. I am curious whether others have comments about entering the 'while' loop and depending on the 'Complete' state to get out of it. I may also put a timer of some sort in there, just to be safe, as sketched below.
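For the timer idea, a small variation on the method above (only a sketch; the 30-second limit is an arbitrary choice):

private string ReadPage(string link, int timeoutSeconds = 30)
{
    this.wbrwPages.Navigate(link);

    // Bail out if the page never reaches the Complete state.
    var watch = System.Diagnostics.Stopwatch.StartNew();
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        if (watch.Elapsed.TotalSeconds > timeoutSeconds)
        {
            return null; // or throw, depending on how failures should be handled
        }
        Application.DoEvents();
    }
    return this.wbrwPages.DocumentText;
}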
I have been given a task to crawl / parse and index the available books on many library web pages. I usually use the HTML Agility Pack and C# to parse web site content. One of the sites is the following:
http://bibliotek.kristianstad.se/pls/bookit/pkg_www_misc.print_index?in_language_id=en_GB
If you search for * (all books), it returns many lists of books, paginated at 10 books per page.
Typical web crawlers that I have found fail on this website. I have also tried to write my own crawler, which would go through all links on the page and generate the POST/GET variables needed to dynamically generate the results. I haven't been able to get this working either, mostly due to some 404 errors that I get (although I am certain that the generated links are correct).
The site relies on JavaScript to generate content, and uses a mixed mode of GET and POST variable submission.
I'm going out on a limb, but try observing the JavaScript GETs and POSTs with Fiddler, and then you can base your crawling on those requests. Fiddler has FiddlerCore, which you can embed in your own C# project. Using it, you could monitor the requests made in the WebBrowser control and then save them for crawling (or whatever) later.
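A rough sketch of the FiddlerCore idea, assuming the classic FiddlerCore API (FiddlerApplication and Session from the Fiddler namespace); it just logs every request that passes through the local proxy so the interesting ones can be replayed later:

using System;
using Fiddler;

class RequestLogger
{
    static void Main()
    {
        // Log every request that goes through the proxy (e.g. from a WebBrowser control).
        FiddlerApplication.BeforeRequest += delegate(Session session)
        {
            Console.WriteLine("{0} {1}", session.RequestMethod, session.fullUrl);
        };

        // Start the proxy on port 8877 and register it as the system proxy.
        FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default);

        Console.WriteLine("Browse the site now; press Enter to stop.");
        Console.ReadLine();

        FiddlerApplication.Shutdown();
    }
}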
Going down the C# JavaScript interpreter route sounds like the 'more correct' way of doing this, but I'd wager it will be much harder and fraught with errors and bugs unless you have the simplest of cases.
Good luck.
FWIW, the C# WebBrowser control is very, very slow. It also doesn't support more than two simultaneous requests.
Using SHDocVw is faster, but it is also semaphore-limited.
Faster still is using MSHTML. Working code here: https://svn.arachnode.net/svn/arachnodenet/trunk/Renderer/HtmlRenderer.cs Username/Password: Public (it doesn't have the request/rendering limitations the other two have when run out of process...)
This is headless, so none of the controls are rendered (faster).
Thanks,
Mike
If you use the WebBrowser control in a Windows Forms application to open the page, then you should be able to access the DOM through its HtmlDocument. That would work for the plain HTML links.
As for the links that are generated through JavaScript, you might look at the ObjectForScripting property, which should allow you to interface with the HTML page through JavaScript. The rest then becomes a JavaScript problem, but it should (in theory) be solvable. I haven't tried this, so I can't say.
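A small sketch of reading the links through the control's HtmlDocument, inside a Windows Forms app after the DocumentCompleted event has fired (webBrowser1 is an assumed control name):

// Hook this up in the form, e.g. webBrowser1.DocumentCompleted += OnDocumentCompleted;
private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    HtmlDocument document = webBrowser1.Document;
    if (document == null)
        return;

    // Enumerate the anchor tags present in the DOM at this point,
    // including any the page's JavaScript has already inserted.
    foreach (HtmlElement link in document.Links)
    {
        string href = link.GetAttribute("href");
        string text = link.InnerText;
        Console.WriteLine("{0} -> {1}", text, href);
    }
}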
If the site generates content with JavaScript, then you are out of luck with plain HTTP requests. You need a full JavaScript engine usable from C# so that you can actually execute the scripts and capture the output they generate.
Take a look at this question: Embedding JavaScript engine into .NET - but know that it will take "serious" effort to do what you need.
AbotX does the JavaScript rendering for you. It's not free, though.
I need to write C# code to grab the contents of a web page. The steps look like the following:
Browse to the login page.
I have a user name and a password; provide them programmatically and log in.
Then you are on the detail page.
You have to get some information there, like (product Id, Des, etc.).
Then you need to click (by code) on Detail View.
Then you can get the price for that product from there.
Once that is done, we can write a detail line into a text file like this:
ABC Printer::225519::285.00
Please help me with this. (Even VB.NET code is OK, I can convert it to C#.)
The WatiN library is probably what you want, then. Basically, it controls a web browser (native support for IE and Firefox, I believe, though they may have added more since I last used it) and provides an easy syntax for programmatically interacting with page elements within that browser. All you'll need are the names and/or IDs of those elements, or some unique way to identify them on the page.
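A rough sketch of that flow with WatiN; the URL, element names and the price selector are placeholders, so the real IDs would have to come from the actual pages:

using System;
using WatiN.Core;

class WatinLoginSketch
{
    [STAThread]
    static void Main()
    {
        // WatiN drives a real IE instance, so the site's JavaScript runs as usual.
        using (var browser = new IE("http://example.com/login"))
        {
            // Placeholder element names - inspect the login form for the real ones.
            browser.TextField(Find.ByName("username")).TypeText("myUser");
            browser.TextField(Find.ByName("password")).TypeText("myPassword");
            browser.Button(Find.ByName("btnLogin")).Click();

            // Click through to the detail view and read the price element.
            browser.Link(Find.ByText("Detail View")).Click();
            string price = browser.Span(Find.ById("lblPrice")).Text;

            Console.WriteLine("ABC Printer::225519::" + price);
        }
    }
}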
You should be able to achieve this using the WebRequest class to retrieve pages, and the HTML Agility Pack to extract elements from HTML source.
Yeah, I downloaded that library. Nice one.
Thanks for sharing it with me. But I have an issue with that library: the site I want to get data from has a "captcha" on the login page.
I can enter that value if the program can show the image and wait for my input.
Can we achieve that with this library? If so, I'd like to have a sample.
You should be able to achieve this by using two classes in C#: HttpWebRequest (to request the web pages) and perhaps XmlTextReader (to parse the HTML/XML response).
If you do not wish to use XmlTextReader, then I'd advise looking into regular expressions, as they are fantastically useful for extracting information from large bodies of text in which patterns exist.
How to: Send Data Using the WebRequest Class
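Along those lines, a short sketch of posting login data with WebRequest and pulling a value out of the response with a regular expression (the URL, form fields and pattern are placeholders, and a regex approach is fragile compared to a real HTML parser):

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Text.RegularExpressions;

class WebRequestLoginSketch
{
    static void Main()
    {
        // Placeholder login endpoint and form field names.
        WebRequest request = WebRequest.Create("http://example.com/login");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        byte[] body = Encoding.UTF8.GetBytes("username=myUser&password=myPassword");
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (WebResponse response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();

            // Placeholder pattern for the price shown on the detail page.
            Match match = Regex.Match(html, @"class=""price"">([\d\.]+)<");
            if (match.Success)
            {
                Console.WriteLine("Price: " + match.Groups[1].Value);
            }
        }
    }
}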