I'm downloading a web site using WebClient:
public void download()
{
    client = new WebClient();
    client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
    client.Encoding = Encoding.UTF8;
    client.DownloadStringAsync(new Uri(eUrl.Text));
}
void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    SaveFileDialog sd = new SaveFileDialog();
    if (sd.ShowDialog() == DialogResult.OK)
    {
        StreamWriter writer = new StreamWriter(sd.FileName, false, Encoding.Unicode);
        writer.Write(e.Result);
        writer.Close();
    }
}
This works fine, but I am unable to read content that is loaded via AJAX, like this:
<div class="center-box-body" id="boxnews" style="width:768px;height:1167px; ">
loading .... </div>
<script language="javascript">
ajax_function('boxnews',"ajax/category/personal_notes/",'');
</script>
This "ajax_function" downloads data from the server on the client side.
How can I download the full HTML of the page, including the AJAX-loaded content?
To do that, you would need a JavaScript runtime, which in practice means hosting a full-blown web browser. Unfortunately, WebClient isn't capable of this.
Your only option would be to automate a WebBrowser control. You would need to send it to the URL, wait until both the main page and any AJAX content have loaded (including triggering that load if user action is required), then scrape the entire DOM.
If you are only scraping a particular site, you are probably better off pulling the AJAX URL yourself (simulating all required parameters) rather than pulling the web page that calls it.
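For example, a minimal sketch of that approach (the host name is a placeholder; the path comes from the markup in the question, and any required parameters would have to be copied from what the page actually sends):

using System;
using System.Net;
using System.Text;

class AjaxFetch
{
    static void Main()
    {
        // Request the AJAX endpoint directly instead of the page that calls it.
        // "ajax/category/personal_notes/" is taken from the question's markup;
        // the host name is a placeholder.
        var client = new WebClient();
        client.Encoding = Encoding.UTF8;
        string fragment = client.DownloadString("http://example.com/ajax/category/personal_notes/");

        // The response is the HTML fragment the page would have injected
        // into the "boxnews" div.
        Console.WriteLine(fragment);
    }
}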
I think you'd need to use a WebBrowser control to do this, since you actually need the JavaScript on the page to run to complete the page load. Depending on your application this may or may not be possible; note that it's a Windows Forms control.
When you visit a page in a browser, it:
1. downloads a document from the requested URL;
2. downloads anything referenced by an img, link, script, etc. tag (anything that references an external file);
3. executes JavaScript where applicable.
The WebClient class only performs step 1. It encapsulates a single HTTP request and response. It does not contain a script engine and, as far as I know, does not look for img tags and the like that reference other files, so it never initiates further requests to obtain them.
If you want the page as it looks after an AJAX call and its handler have run, you'll need a class with the full capabilities of a web browser, which pretty much means automating an actual browser server-side. The WebBrowser control can do this, but it's WinForms-only, I think. I shudder to think of the security issues here, or of the load placed on the server if multiple users exercise this facility simultaneously.
A better question to ask yourself is: why are you doing this? If the data you're really interested in is obtained via AJAX (probably through a web service), why not skip the WebClient step and go straight to the source?
Related
Is there a way to get the fully rendered HTML of a web page using WebClient, instead of the page source? I'm trying to scrape some data from the page's HTML. My current code is like this:
WebClient client = new WebClient();
var result = client.DownloadString("https://somepageoutthere.com/");
// using CsQuery
CQ dom = result;
var someElementHtml = dom["body > main"];
WebClient will only return what the server sends for the URL you requested. It will not run any JavaScript on the page (that runs on the client), so if JavaScript changes the page DOM in any way, you will not see those changes through WebClient.
You are better off using other tools; look for ones that render both the HTML and the JavaScript on the page.
I don't know what you mean by "fully rendered", but if you mean "with all data loaded by AJAX calls", the answer is: no, you can't.
The data that is not present in the initial HTML page is loaded through JavaScript in the browser; WebClient has no idea what JavaScript is and cannot interpret it, only browsers can.
To get this kind of data, you need to identify those calls (if you don't know the URL of the data web service, a tool like Fiddler helps), simulate/replay them from your application, and then extract the data from the response (easy if the data comes as JSON, trickier if it comes as HTML).
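A minimal sketch of that replay approach, assuming (hypothetically) that Fiddler showed the page calling a JSON endpoint; the URL and field name are placeholders:

using System;
using System.Collections.Generic;
using System.Net;
using System.Web.Script.Serialization;  // reference System.Web.Extensions

class ReplayAjaxCall
{
    static void Main()
    {
        // Hypothetical endpoint discovered with Fiddler; replace with the
        // URL (and query parameters) the real page requests.
        var client = new WebClient();
        string json = client.DownloadString("https://somepageoutthere.com/api/news?page=1");

        // If the payload is JSON, extracting data is straightforward.
        var serializer = new JavaScriptSerializer();
        var data = (Dictionary<string, object>)serializer.DeserializeObject(json);
        Console.WriteLine(data["title"]);
    }
}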
Better, use the Html Agility Pack: http://html-agility-pack.net
It has all the functionality you need to scrape web data, and there is good help on the site.
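For illustration, a minimal Html Agility Pack sketch (note that, like WebClient, it fetches the raw HTML and does not execute JavaScript; the URL and XPath expression are placeholders):

using System;
using HtmlAgilityPack;

class HapExample
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page in one step.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://somepageoutthere.com/");

        // Query the parsed DOM with XPath; SelectNodes returns null
        // when nothing matches.
        var nodes = doc.DocumentNode.SelectNodes("//div[@class='center-box-body']");
        if (nodes != null)
        {
            foreach (HtmlNode node in nodes)
                Console.WriteLine(node.InnerText.Trim());
        }
    }
}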
I am developing a local server using self-hosted ServiceStack. I hardcoded a demo webpage and allow it to be accessed at localhost:8080/page:
public class PageService : IService<Page>
{
    public object Execute(Page request)
    {
        var html = System.IO.File.ReadAllText(@"demo_chat2.html");
        return html;
    }
}
// set route
public override void Configure(Container container)
{
    Routes
        .Add<Page>("/page")
        .Add<Hello>("/hello")
        .Add<Hello>("/hello/{Name}");
}
It works fine in Chrome/Firefox/Opera; however, IE treats the URL as a download request and prompts "Do you want to open or save page from localhost?"
What shall I do to make IE treat the URL as a web page? (I already added a doctype to the demo page, but that does not stop IE from treating it as a download.)
EDIT.
OK, I used Fiddler to check the response when accessing localhost. The responses that IE and Firefox receive are exactly the same, and the header gives the content type as:
Content-Type: text/html,text/html
Firefox treats this as text/html, but IE does not recognize the doubled value (it only accepts a single text/html).
So this leads me to believe it is caused by a bug in ServiceStack.
Solution
One solution is to explicitly set the content type:
return new HttpResult(
    new MemoryStream(Encoding.UTF8.GetBytes(html)), "text/html");
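In context, the service method would look roughly like this (a sketch based on the code above; note the extra usings for MemoryStream and Encoding):

using System.IO;
using System.Text;

public class PageService : IService<Page>
{
    public object Execute(Page request)
    {
        var html = File.ReadAllText(@"demo_chat2.html");

        // Returning an HttpResult with an explicit content type avoids the
        // doubled "text/html,text/html" header that confuses IE.
        return new HttpResult(
            new MemoryStream(Encoding.UTF8.GetBytes(html)), "text/html");
    }
}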
I don't know what your exact problem is, but if you want to serve an HTML page, there is a different way to do it.
ServiceStack supports the Razor engine as a plugin, which is useful if you want to serve HTML pages and bind data to them. This approach is explained at razor.servicestack.net. Let me know if you need any additional details.
I am using this code:
HttpWebResponse objHttpWebResponse = (HttpWebResponse)objHttpWebRequest.GetResponse();
return new StreamReader(objHttpWebResponse.GetResponseStream()).ReadToEnd();
I get the page content successfully, but my problem is that some dynamic content is populated by JavaScript functions on the page, and it seems the content is fetched before those functions finish executing, so those parts of the page come back empty. Is there any way to wait until the page is completely loaded, including all of its content?
Edit:
Regarding @ElDog's answer, I tried the following code, but with no luck:
WebBrowser objWebBrowser = new WebBrowser();
objWebBrowser.DocumentCompleted += objWebBrowser_DocumentCompleted;
objWebBrowser.Navigate(url);
and in the DocumentCompleted event I executed the following code:
string content = ((WebBrowser)(sender)).Document.Body.InnerHtml;
But the JavaScript functions still did not execute.
HttpWebRequest will not execute JavaScript at all; it just gives you the raw response the browser receives. To execute JavaScript you need a web browser engine in your code.
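One common reason the WebBrowser attempt in the question fails is running it outside a Windows Forms message loop (e.g. in a console app): without a message pump on an STA thread, DocumentCompleted never fires and scripts never run. A rough sketch of hosting the control correctly (how long to wait for AJAX content afterwards is still up to you):

using System;
using System.Threading;
using System.Windows.Forms;

class BrowserHost
{
    static string GetRenderedHtml(string url)
    {
        string html = null;
        var thread = new Thread(() =>
        {
            var browser = new WebBrowser { ScriptErrorsSuppressed = true };
            browser.DocumentCompleted += (s, e) =>
            {
                // The main document is loaded here; AJAX content may still
                // be pending, so a real scraper would poll or delay.
                html = browser.Document.Body.Parent.OuterHtml;
                Application.ExitThread();  // stop the message loop below
            };
            browser.Navigate(url);
            Application.Run();             // pump messages so events fire
        });
        thread.SetApartmentState(ApartmentState.STA);  // WebBrowser requires STA
        thread.Start();
        thread.Join();
        return html;
    }
}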
I am using this code:
string loadFile = HttpContext.Current.Request.Url.AbsoluteUri;
// this.Response.ClearContent();
// this.Response.ClearHeaders();
this.Response.AppendHeader("content-disposition", "attachment; filename=" + filename);
this.Response.ContentType = "application/html";
this.Response.WriteFile("C:\\Users\\Desktop\\Jobspoint Website\\jobpoint3.0\\print.aspx");
this.Response.Flush();
this.Response.Close();
this.Response.End();
to download an ASPX page in ASP.NET (C#), but it only shows the HTML tags and static values. How can I save the entire rendered page, with the values retrieved from the database, instead of the raw markup?
Thanks...
Leema
Use WebClient for this. It will download your file.
If I have understood correctly, one option would be to make a request to the web server, using WebClient for example, and then write the response to Response.OutputStream. The server then effectively makes a second request to itself and sends the response to that second request back to the client.
This way the web server actually processes the request, so you get the resulting HTML back rather than just the raw ASPX page.
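A minimal sketch of that idea in a page's code-behind (assuming print.aspx is reachable over HTTP on the same site; the file name in the header is illustrative):

using System;
using System.Net;

public partial class Download : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Ask the server to render print.aspx by requesting it over HTTP,
        // exactly as a browser would, so the page life cycle runs and the
        // data-bound values are filled in.
        var printUrl = new Uri(Request.Url, "print.aspx");
        byte[] rendered = new WebClient().DownloadData(printUrl);

        // Send the rendered HTML back to the client as a download.
        Response.ContentType = "text/html";
        Response.AppendHeader("content-disposition", "attachment; filename=print.html");
        Response.OutputStream.Write(rendered, 0, rendered.Length);
        Response.End();
    }
}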
As some of you may know, you can POST with C#, which means you can "push" buttons on a website with WebRequest/WebResponse. But there are also buttons on sites that work through JavaScript, starting like this:
(function($j){
    $j.data(document, 'maxPictureSize', 764327);
    share_init();
})(jQuery.noConflict());
Is there any solution that lets you make those function calls from C#, with HTTP requests or some other kind of library?
I'm assuming you have a program that wants to manipulate the server back end for a web page by making the server think someone pushed a button that POSTs, sending the same data the web page would include with its POST.
The first tool you need is Microsoft Network Monitor 3.3, or another network packet tracing tool. Use this to look at the POST from the real web page. NetMon (at least) decomposes the packet into the HTTP pieces and headers, so you can easily see what's going on.
Now you will know what data the real POST is sending, and the URL to which it is sending the data (with any possible "query string" - which is unusual for a POST).
Next you need to write C# that creates the same sort of POST to the same URL. It seems you already know about HttpWebRequest/HttpWebResponse, so I won't explain them in detail. You may have noticed in your NetMon trace that the Content-Type header was application/x-www-form-urlencoded. This is most often data from an HTML form, URL-encoded (as the name suggests), so you need to URL-encode your data before POSTing it, and you need to know the size of the encoded data for the Content-Length header. HttpUtility.UrlEncode() is one method you can use for this encoding.
Once you think you have it, try it and use NetMon to inspect your POST request and the response from the server. Keep going until you have duplicated what the mystery web page is doing.
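A sketch of such a hand-crafted POST (the URL and field names are placeholders for whatever your trace shows):

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Web;  // reference System.Web for HttpUtility

class FormPost
{
    static void Main()
    {
        // Field names and target URL are placeholders; copy the real ones
        // from your NetMon (or Fiddler) trace of the browser's POST.
        string body = "username=" + HttpUtility.UrlEncode("john doe") +
                      "&action=" + HttpUtility.UrlEncode("share");
        byte[] bytes = Encoding.UTF8.GetBytes(body);

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/post-target");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = bytes.Length;
        using (Stream stream = request.GetRequestStream())
            stream.Write(bytes, 0, bytes.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}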
OK, use a WebBrowser control to load the page:
webBrowser.Navigate(url);
Then save the contents of the WebBrowser control to a file or a string:
File.WriteAllText(@"c:\test\ajax_test.txt", webBrowser.Document.Body.Parent.OuterHtml, Encoding.GetEncoding(webBrowser.Document.Encoding));
Now if you look at the text file, it should contain the HTML tags you are looking for.
Even when JavaScript does a POST, there is a POST somewhere in the JS that works the same way as a button submit. You just have to dig down to the place where the JS code posts and see how it does it, then craft the same POST in C#.
Take, for example, ASP.NET's own __doPostBack function:
var theForm = document.forms['aspnetForm'];
if (!theForm) {
    theForm = document.aspnetForm;
}
function __doPostBack(eventTarget, eventArgument) {
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
        theForm.__EVENTTARGET.value = eventTarget;
        theForm.__EVENTARGUMENT.value = eventArgument;
        theForm.submit();
    }
}
You can see that it finds the form, sets the values of several hidden input fields, and submits it. Basically, you fill in the same values for those inputs and submit the same form, and you have reproduced the JS submit yourself.
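For illustration, a hedged sketch of reproducing that postback from C# (the page URL and event target name are made up, and a real WebForms page will also expect __VIEWSTATE and often __EVENTVALIDATION, which you would scrape from a prior GET):

using System;
using System.Net;
using System.Web;  // reference System.Web for HttpUtility

class DoPostBackSimulator
{
    static void Main()
    {
        // Placeholder: on a real page, GET it first and scrape the hidden
        // __VIEWSTATE (and __EVENTVALIDATION) fields out of the HTML.
        string viewState = "...scraped from the page...";

        string body =
            "__EVENTTARGET=" + HttpUtility.UrlEncode("ctl00$MainContent$lnkShare") +
            "&__EVENTARGUMENT=" +
            "&__VIEWSTATE=" + HttpUtility.UrlEncode(viewState);

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            // UploadString sends a POST by default.
            string response = client.UploadString("http://example.com/page.aspx", body);
            Console.WriteLine(response);
        }
    }
}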
You need to capture the requests and headers those buttons send and simulate them with HttpWebRequest. You could also take a look at WatiN if you want to automate user actions on web sites.