Can I use something like WebClient or WebRequest (I don't know which) to do the following: click a button, and once it's clicked, send a string, say "Hello", to a website's textbox?
For example: on button1 click, write "Hello" into textbox1 on the website http://Test.com.
There are two general ways to deal with websites: either you speak raw HTTP from your C# program (this is what WebRequest does), or you use COM/Interop to control a browser.
If you need to interact with the rendered HTML (clicking buttons, filling in textboxes), then you need to remote-control a browser via Interop. Another alternative worth looking at is Selenium.
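If the page's button just submits a form, you can often skip the browser and send the same HTTP POST yourself. Here is a minimal sketch using WebClient.UploadValues; the form action URL and the field name "textbox1" are assumptions you would confirm by inspecting the page's HTML:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class FormPostExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Hypothetical field name and form action; check the <form> tag
            // on http://Test.com for the real values.
            var formData = new NameValueCollection
            {
                { "textbox1", "Hello" }
            };

            // UploadValues sends an application/x-www-form-urlencoded POST
            // and returns the server's response body.
            byte[] response = client.UploadValues("http://Test.com/submit", formData);
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}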
WebClient is a class in the System.Net namespace.
You can download the content of a page through WebClient by writing a method like:
using System.Net;

public static string DownloadContent(string urlOfPage)
{
    // DownloadString performs a GET and returns the response body as a string.
    using (WebClient client = new WebClient())
    {
        return client.DownloadString(urlOfPage);
    }
}
This method returns the HTML of the page you want to download.
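For example, a quick call with a placeholder URL:

string html = DownloadContent("http://Test.com");
Console.WriteLine(html);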
If you need something else, tell me in a comment.
Is there a way to get the fully rendered HTML of a web page using WebClient, instead of the page source? I'm trying to scrape some data from the page's HTML. My current code is like this:
WebClient client = new WebClient();
var result = client.DownloadString("https://somepageoutthere.com/");
//using CsQuery
CQ dom = result;
var someElementHtml = dom["body > main"];
WebClient will only return the raw HTML of the URL you requested. It will not run any JavaScript on the page (that runs on the client), so if JavaScript changes the page DOM in any way, you will not see those changes through WebClient.
You are better off using other tools. Look for ones that will render the HTML and execute the JavaScript in the page.
I don't know what you mean by "fully rendered", but if you mean "with all data loaded by AJAX calls", the answer is: no, you can't.
The data that is not present in the initial HTML page is loaded through JavaScript in the browser, and WebClient has no idea what JavaScript is and cannot interpret it; only browsers do.
To get this kind of data, you need to identify these calls (if you don't know the URL of the data webservice, you can use a tool like Fiddler), simulate/replay them from your application, and then, if successful, get the response data and extract what you need from it (this will be easy if the data comes as JSON, and trickier if it comes as HTML).
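As a minimal sketch of that replay approach: assuming Fiddler showed the page calling a JSON endpoint (the URL below is hypothetical), you can hit it directly with WebClient and parse the result, for example with Json.NET:

using System;
using System.Net;
using Newtonsoft.Json.Linq; // Json.NET, installed via NuGet

class AjaxReplayExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Hypothetical endpoint discovered by watching traffic in Fiddler.
            string json = client.DownloadString("https://somepageoutthere.com/api/data?page=1");

            // Parse the JSON and pull out the fields you need.
            JObject data = JObject.Parse(json);
            Console.WriteLine(data["items"]?[0]?["title"]);
        }
    }
}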
Better to use the Html Agility Pack: http://html-agility-pack.net
It has all the functionality you need to scrape web data, and there is good documentation on the site.
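A small sketch of what scraping with it looks like (the URL and the XPath expression are placeholders):

using System;
using HtmlAgilityPack; // installed via NuGet

class ScrapeExample
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page in one step.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://Test.com");

        // Select nodes with XPath; here, every link on the page.
        // SelectNodes returns null when nothing matches, hence the check.
        var links = doc.DocumentNode.SelectNodes("//a[@href]");
        if (links != null)
        {
            foreach (HtmlNode link in links)
            {
                Console.WriteLine(link.GetAttributeValue("href", ""));
            }
        }
    }
}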
Without the user seeing the browser window? I want the program to search for something secretly on the internet and copy the text of the first website in the search results.
You can use the I'm Feeling Lucky URL and an HttpWebRequest:
public Uri GetFirstResult(string term)
{
    // Use the I'm Feeling Lucky URL (btnI=1); Google redirects straight
    // to the first hit, and ResponseUri captures where we ended up.
    var url = string.Format(
        "https://www.google.com/search?num=100&site=&source=hp&q={0}&btnI=1",
        Uri.EscapeDataString(term));
    HttpWebRequest req = HttpWebRequest.CreateHttp(url);
    using (WebResponse resp = req.GetResponse())
    {
        return resp.ResponseUri;
    }
}
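Then, to copy the text of that first result, you can feed the returned Uri to a WebClient (this gives raw HTML; extracting only the visible text is a separate step):

Uri first = GetFirstResult("hello world");
using (var client = new WebClient())
{
    string text = client.DownloadString(first);
}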
You need to use the Google API for that:
https://code.google.com/p/google-api-for-dotnet/
I have this problem: I am writing a simple web spider and it works well so far. The problem is that the site I am working on has the nasty habit of redirecting, or adding things to the address, sometimes. On some pages it appends "/about" after you load them, and on some it redirects entirely to another page.
The WebClient gets confused: it downloads the HTML code and starts to parse the links, but since many of them are in the format "../../something", it simply crashes after a while, because it resolves each link against the originally requested address (before the redirect or the added "/about"). When the newly created page comes out of the queue, it throws a 404 Not Found exception (surpriiise).
Now I could just add "/about" to every page myself but, just to keep things fun, the website itself doesn't always add it...
I would appreciate any ideas.
Thank you for your time, and all the best!
If you want to get the post-redirect URI of a page so you can correctly resolve the links inside it, use a subclass of WebClient like this:
class MyWebClient : WebClient
{
    Uri _responseUri;

    public Uri ResponseUri
    {
        get { return _responseUri; }
    }

    protected override WebResponse GetWebResponse(WebRequest request)
    {
        // Capture the final URI after any redirects have been followed.
        WebResponse response = base.GetWebResponse(request);
        _responseUri = response.ResponseUri;
        return response;
    }
}
Now use MyWebClient instead of WebClient, and resolve each relative link against ResponseUri rather than the URL you originally requested.
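A short sketch of that resolution step (the start URL is a placeholder):

var client = new MyWebClient();
string html = client.DownloadString("http://example.com/somepage");

// ResponseUri reflects the post-redirect address, so relative links
// like "../../something" resolve to the correct absolute URL.
Uri absoluteLink = new Uri(client.ResponseUri, "../../something");
Console.WriteLine(absoluteLink);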
I'm downloading a web site using WebClient:
public void download()
{
    client = new WebClient();
    client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
    client.Encoding = Encoding.UTF8;
    client.DownloadStringAsync(new Uri(eUrl.Text));
}
void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    SaveFileDialog sd = new SaveFileDialog();
    if (sd.ShowDialog() == DialogResult.OK)
    {
        StreamWriter writer = new StreamWriter(sd.FileName, false, Encoding.Unicode);
        writer.Write(e.Result);
        writer.Close();
    }
}
This works fine, but I am unable to read content that is loaded using AJAX, like this:
<div class="center-box-body" id="boxnews" style="width:768px;height:1167px; ">
loading .... </div>
<script language="javascript">
ajax_function('boxnews',"ajax/category/personal_notes/",'');
</script>
This "ajax_function" downloads data from server on the client side.
How can I download the full web html data?
To do so, you would need to host a JavaScript runtime inside a full-blown web browser. Unfortunately, WebClient isn't capable of doing this.
Your only option would be automating a WebBrowser control. You would need to send it to the URL, wait until both the main page and any AJAX content have been loaded (including triggering that load yourself if user action is required to do so), then scrape the entire DOM.
If you are only scraping a particular site, you are probably better off calling the AJAX URL yourself (simulating all required parameters), rather than pulling the web page that calls it.
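In this case the AJAX URL is visible right in the snippet ("ajax/category/personal_notes/"), so a sketch of calling it directly could look like this; the host name is a placeholder, and the handler may expect extra parameters:

using System;
using System.Net;
using System.Text;

class AjaxDirectExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Encoding = Encoding.UTF8;

            // Relative path taken from the ajax_function call in the page;
            // prefix it with the site's real host.
            string boxnewsHtml = client.DownloadString(
                "http://example.com/ajax/category/personal_notes/");
            Console.WriteLine(boxnewsHtml);
        }
    }
}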
I think you'd need to use a WebBrowser control to do this, since you actually need the JavaScript on the page to run to complete the page load. Depending on your application this may or may not be possible for you; note that it's a Windows Forms control.
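A rough sketch of that approach (note that DocumentCompleted fires when the document loads, not necessarily after every AJAX call has finished, so in practice you may need an extra delay or a check for the element the script fills in):

using System;
using System.Windows.Forms;

class BrowserScrapeExample
{
    [STAThread]
    static void Main()
    {
        var browser = new WebBrowser { ScriptErrorsSuppressed = true };
        browser.DocumentCompleted += (s, e) =>
        {
            // The DOM here reflects any changes scripts have made so far.
            Console.WriteLine(browser.Document.Body.InnerHtml);
            Application.Exit();
        };
        browser.Navigate("http://example.com");
        Application.Run(); // pump messages so the control can load the page
    }
}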
When you visit a page in a browser, it:
1. downloads a document from the requested URL;
2. downloads anything referenced by an img, link, script, etc. tag (anything that references an external file);
3. executes JavaScript where applicable.
The WebClient class only performs step 1. It encapsulates a single HTTP request and response. It does not contain a script engine and, as far as I know, does not find img tags etc. that reference other files and initiate further requests to obtain them.
If you want to get a page once it has been modified by an AJAX call and handler, you'll need to use a class with the full capabilities of a web browser, which pretty much means using a web browser that you can somehow automate server-side. The WebBrowser control does this, but it's for WinForms only, I think. I shudder to think of the security issues here, or of the demand that would be placed on the server if multiple users took advantage of this facility simultaneously.
A better question to ask yourself is: why are you doing this? If the data you're really interested in is obtained via AJAX (probably through a web service), why not skip the WebClient step and go straight to the source?
I have one site that displays HTML content that needs to be shown on another site (the first site is the front end to a content management system).
So the site1 page = http://site1/pagecontent, and on site2 (http://site2/pagecontent) I need to be able to display the contents shown on the site1 page.
I am using .NET C# (MVC), and do not want to just use an iframe.
Is there a way to suck the HTML into a string variable?
Yes, you can use the WebClient class: http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx
WebClient wClient = new WebClient();
string s = wClient.DownloadString("http://site1/pagecontent");
Sure. See the System.Net.WebClient class, specifically the DownloadString() method.