C# HttpRequest posting a URL

I'm trying to get an HttpRequest to post to this URL:
https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234&TABLE_NAME=table_name_here&USERNAME=yo@yo.com&PASSWORD=What!What!
I've tried using
WebClient rar = new WebClient();
rar.OpenReadAsync(new Uri(@"https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234&TABLE_NAME=table_name_here&USERNAME=yo@yo.com&PASSWORD=What!What!"));
rar.DownloadStringAsync(new Uri(@"https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234&TABLE_NAME=table_name_here&USERNAME=yo@yo.com&PASSWORD=What!What!"));
This is supposed to delete my information on their site, but it's not taking. I'm following this documentation:
http://getsatisfaction.com/exzact/topics/how_can_we_delete_old_records_not_manually
and they state that all I have to do is paste the proper URL into a web browser and hit Enter and it will work. How would I do the equivalent in C#? Any help would be awesome! Thanks!

Use WebClient.DownloadString instead of DownloadStringAsync. The Async suffix indicates an asynchronous method that does not block the current thread, and it only delivers its result through a completion event; called bare, as in your code, the response is simply thrown away.
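For illustration, a minimal sketch of both calls (the long iFormBuilder URL is abbreviated here); the async variant only does something useful if you subscribe to DownloadStringCompleted first:
using System;
using System.Net;

WebClient client = new WebClient();
// Synchronous: blocks until the whole response body has arrived.
string body = client.DownloadString("https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234");
// Asynchronous: wire up the completion event before starting the call,
// otherwise the result (and any error) is silently discarded.
client.DownloadStringCompleted += (sender, e) =>
{
    if (e.Error == null)
        Console.WriteLine(e.Result);
};
client.DownloadStringAsync(new Uri("https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234"));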

Try setting the User-Agent header on your WebClient before you submit the request, to see if that fixes things:
rar.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
A lot of web servers are set up to simply ignore requests if the User-Agent header is missing.
Additionally, since you're using HTTPS here, you may want to set up your ServicePointManager.ServerCertificateValidationCallback as well.
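For reference, a minimal sketch of that callback, assuming you only need to get past certificate errors while testing (accepting every certificate like this is unsafe in production):
using System.Net;

// DANGER: accepts any server certificate; use only for local diagnosis.
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) => true;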

Try using the System.Net classes (HttpWebRequest/HttpWebResponse), for example like this:
HttpWebRequest req = null;
HttpWebResponse resp = null;
try
{
    req = (HttpWebRequest)WebRequest.Create(url); // enter your url
    req.Method = "POST"; // HTTP method names are uppercase
    resp = (HttpWebResponse)req.GetResponse();
}
finally
{
    if (resp != null)
        resp.Close(); // release the connection
}
This is an example for the POST method; you can use any other HTTP method the same way. Check the documentation.

This isn't a direct answer to your question but check out Hammock for REST

string uriString = @"https://www.iformbuilder.com/exzact/_emptyTable.php?PAGE_ID=1234&TABLE_NAME=table_name_here&USERNAME=yo@yo.com&PASSWORD=What!What!";
using (WebClient webClient = new WebClient { Encoding = Encoding.UTF8 })
{
    try
    {
        string content = webClient.DownloadString(uriString);
        //do stuff with the answer you got back from the site
    }
    catch (Exception exception)
    {
        //handle exceptions
    }
}

Related

Unable to download the HTML of a specific website

I am doing web parsing in a C# console application.
My code is:
var req = WebRequest.Create("http://watch.squidtv.net/");
req.BeginGetResponse(r =>
{
    var response = req.EndGetResponse(r);
    var stream = response.GetResponseStream();
    var reader = new StreamReader(stream, true);
    var str = reader.ReadToEnd();
    Console.WriteLine(str);
}, null);
This code runs fine with other URLs, but when I changed the URL to http://watch.squidtv.net/, two problems occurred.
First, it does not download the site's HTML.
Second, the CPU load spikes (the fan becomes audible).
Then I changed the code and used WebClient, like this:
WebClient client = new WebClient();
string htmlCode = client.DownloadString("http://watch.squidtv.net");
Console.WriteLine(htmlCode);
But the problem is the same. :(
What could the problem be?
I found the solution.
The problem was in the HTTP headers: the server sends the page gzip-encoded, and HttpWebRequest was not accepting/decompressing the gzip response, which caused the problem. When I used this code, the problem was solved:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://watch.squidtv.net/");
// advertise gzip/deflate support and let the framework decompress transparently
req.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";
req.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
req.Method = "GET";
req.UserAgent = "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)";
string htmlCode;
using (StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream()))
{
    htmlCode = reader.ReadToEnd();
}
One idea: you may have to specify more headers in your WebRequest so that the SquidTV server knows to send you back the HTML.
Consider that a browser sends lots of headers to the server. If you want to take a look, use Fiddler or Wireshark to see all the extra data that gets sent.
Firewalls could be another issue: you may be sending out a request that is not allowed, so nothing comes back. Here again an intermediate tool like Wireshark or Fiddler is useful for seeing whether the request is at least getting out.

matweb.com: How to get source of page?

I have a URL like:
http://www.matweb.com/search/DataSheet.aspx?MatGUID=849e2916ab1541be9ff6a17b78f95c82
I want to download the source code of that page using this code:
private static string urlTemplate = @"http://www.matweb.com/search/DataSheet.aspx?MatGUID=";
static string GetSource(string guid)
{
    try
    {
        Uri url = new Uri(urlTemplate + guid);
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url);
        webRequest.Method = "GET";
        HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
        Stream responseStream = webResponse.GetResponseStream();
        StreamReader responseStreamReader = new StreamReader(responseStream);
        String result = responseStreamReader.ReadToEnd();
        return result;
    }
    catch (Exception ex)
    {
        return null;
    }
}
When I do so I get:
You do not seem to have cookies enabled. MatWeb Requires cookies to be enabled.
OK, that I understand, so I added these lines:
CookieContainer cc = new CookieContainer();
webRequest.CookieContainer = cc;
I got:
Your IP Address has been restricted due to excessive use. The problem may be compounded when an IP address may be shared by many people in a company or through an internet service provider. We apologize for any inconvenience.
I can understand this, but I don't get this message when I visit the page in a web browser. What can I do to get the source code? Some cookies or HTTP headers?
It probably doesn't like your UserAgent. Try this:
webRequest.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729)"; //maybe substitute your own in here
It looks like you're doing something the company doesn't like, given the "excessive use" response: you are downloading pages too fast.
With a browser you might fetch at most one page per second. An application can fetch several pages per second, and that is probably what their web server is detecting. Hence the excessive-use block.
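If you still need the data, a minimal sketch of one workaround, assuming simple rate limiting is enough (the two-second pause and the guids list are illustrative):
using System;
using System.Net;
using System.Threading;

WebClient client = new WebClient();
foreach (string guid in guids) // guids: a hypothetical list of MatGUID values
{
    // WebClient clears its headers after every request, so set them inside the loop
    client.Headers.Add("user-agent", "Mozilla/5.0 (Windows NT 6.1; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13");
    string html = client.DownloadString(urlTemplate + guid); // urlTemplate from the question above
    // ... parse html here ...
    Thread.Sleep(2000); // pause ~2 s so the rate stays browser-like
}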

Get web page contents from Firefox in a C# program

I need to write a simple C# app that should receive entire contents of a web page currently opened in Firefox. Is there any way to do it directly from C#? If not, is it possible to develop some kind of plug-in that would transfer page contents? As I am a total newbie in Firefox plug-ins programming, I'd really appreciate any info on getting me started quickly. Maybe there are some sources I can use as a reference? Doc links? Recommendations?
UPD: I actually need to communicate with a Firefox instance, not get contents of a web page from a given URL
It would help if you elaborated on what you are trying to achieve; maybe plugins already out there, such as Firebug, can help.
Anyway, if you really want to develop both the plugin and the C# application:
Check out this tutorial on firefox extension:
http://robertnyman.com/2009/01/24/how-to-develop-a-firefox-extension/
Otherwise, you can use the WebRequest or HttpWebRequest class in .NET to get the HTML source of any URL.
I think you'd almost certainly need to write a Firefox plugin for that. However, there are certainly ways to request a web page and receive its HTML response within C#; it depends on what your requirements are.
If your requirement is simply to receive the source of any website, leave a comment and I'll point you towards the code.
// Note: _UserAgent, _RequestTimeout, _CookieContainer and doc are fields of the
// surrounding class in the original sample (not shown here).
Uri uri = new Uri(url);
System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(uri.AbsoluteUri);
req.AllowAutoRedirect = true;
req.MaximumAutomaticRedirections = 3;
//req.UserAgent = _UserAgent; //"Mozilla/6.0 (MSIE 6.0; Windows NT 5.1; Searcharoo.NET)";
req.KeepAlive = true;
req.Timeout = _RequestTimeout * 1000; //prefRequestTimeout
// SIMONJONES http://codeproject.com/aspnet/spideroo.asp?msg=1421158#xx1421158xx
req.CookieContainer = new System.Net.CookieContainer();
req.CookieContainer.Add(_CookieContainer.GetCookies(uri));
System.Net.HttpWebResponse webresponse = null;
try
{
    webresponse = (System.Net.HttpWebResponse)req.GetResponse();
}
catch (Exception ex)
{
    webresponse = null;
    Console.Write("request for url failed: {0} {1}", url, ex.Message);
}
if (webresponse != null)
{
    webresponse.Cookies = req.CookieContainer.GetCookies(req.RequestUri);
    // handle cookies (need to do this in case we have any session cookies)
    foreach (System.Net.Cookie retCookie in webresponse.Cookies)
    {
        bool cookieFound = false;
        foreach (System.Net.Cookie oldCookie in _CookieContainer.GetCookies(uri))
        {
            if (retCookie.Name.Equals(oldCookie.Name))
            {
                oldCookie.Value = retCookie.Value;
                cookieFound = true;
            }
        }
        if (!cookieFound)
        {
            _CookieContainer.Add(retCookie);
        }
    }
    string enc = "utf-8"; // default
    if (webresponse.ContentEncoding != String.Empty)
    {
        // Use the HttpHeader Content-Type in preference to the one set in META
        doc.Encoding = webresponse.ContentEncoding;
    }
    else if (doc.Encoding == String.Empty)
    {
        doc.Encoding = enc; // default
    }
    //http://www.c-sharpcorner.com/Code/2003/Dec/ReadingWebPageSources.asp
    System.IO.StreamReader stream = new System.IO.StreamReader
        (webresponse.GetResponseStream(), System.Text.Encoding.GetEncoding(doc.Encoding));
    string html = stream.ReadToEnd(); // read the body *before* closing the response
    stream.Close();
    webresponse.Close();
}
This does what you want.
using System.Net;
var cli = new WebClient();
string data = cli.DownloadString("http://www.heise.de");
Console.WriteLine(data);
Native messaging enables an extension to exchange messages with a native application installed on the user's computer, so a modern WebExtension could hand the current page's contents to your C# program.
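For the curious, a minimal sketch of the C# side of such a native messaging host, assuming the WebExtensions wire format (a 4-byte little-endian length prefix followed by that many bytes of UTF-8 JSON on stdin; the method name is illustrative):
using System;
using System.IO;
using System.Text;

static string ReadMessageFromExtension()
{
    Stream stdin = Console.OpenStandardInput();
    byte[] lengthBytes = new byte[4];
    if (stdin.Read(lengthBytes, 0, 4) < 4)
        return null;                                   // the extension closed the pipe
    int length = BitConverter.ToInt32(lengthBytes, 0); // assumes a little-endian host
    byte[] buffer = new byte[length];
    int read = 0;
    while (read < length)
        read += stdin.Read(buffer, read, length - read);
    return Encoding.UTF8.GetString(buffer);            // the JSON the extension sent
}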

C#: Check whether a URL exists?

How can I check whether a page exists at a given URL?
I have this code:
private void check(string path)
{
    try
    {
        Uri uri = new Uri(path);
        WebRequest request = WebRequest.Create(uri);
        request.Timeout = 3000;
        WebResponse response;
        response = request.GetResponse();
    }
    catch (Exception loi) { MessageBox.Show(loi.Message); }
}
But that gives an error message about the proxy. :(
First, you need to understand that your question is at least twofold.
First, check whether the server is responsive at all, using ping for example. While doing this, consider the timeout: after how long will you consider the page as not existing?
Second, try retrieving the page using one of the many methods you can find on Google, again with a timeout in mind; if the server takes long to reply, the page might still "be there" but the server is simply under heavy load.
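A minimal sketch of that two-step check, assuming the server answers HEAD requests (some don't) and that ping isn't blocked by a firewall, so treat both steps as hints rather than proof:
using System;
using System.Net;
using System.Net.NetworkInformation;

static bool PageExists(string url, int timeoutMs)
{
    Uri uri = new Uri(url);
    // Step 1: is the host reachable at all?
    using (Ping ping = new Ping())
    {
        if (ping.Send(uri.Host, timeoutMs).Status != IPStatus.Success)
            return false;
    }
    // Step 2: does the page itself answer?
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "HEAD"; // headers only, no body
        request.Timeout = timeoutMs;
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        return false; // 404, timeout, DNS failure, ...
    }
}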
If the proxy needs to authenticate you with your Windows credentials (e.g. you are in a corporate network) use:
WebRequest request = WebRequest.Create(url);
request.UseDefaultCredentials = true;
request.Proxy.Credentials = request.Credentials;
try
{
    Uri uri = new Uri(path);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri); // Create returns WebRequest, so cast
    request.Timeout = 3000;
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        if (response.StatusCode == HttpStatusCode.OK) // compare against the enum, not the integer 200
        {
            // great - something is there
        }
    }
}
catch (Exception loi)
{
    MessageBox.Show(loi.Message);
}
You can also check the content type and length; see HttpWebResponse on MSDN.
At a guess, without knowing the specific error message or path, you could try casting the WebRequest to an HttpWebRequest and then setting its Proxy.
See MSDN: HttpWebRequest.Proxy property.
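Putting those two suggestions together, a minimal sketch (the proxy address is hypothetical; substitute your own):
using System;
using System.Net;

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(path);
request.Timeout = 3000;
request.Proxy = new WebProxy("http://myproxy:8080", true); // hypothetical proxy address
request.Proxy.Credentials = CredentialCache.DefaultCredentials;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // the response metadata mentioned above
    Console.WriteLine("{0}, {1} bytes", response.ContentType, response.ContentLength);
}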

Communicating with the web through a C# app?

Although I can grasp the concepts of the .NET Framework and Windows apps, I want to create an app that will involve simulating website clicks and getting data/response times from those pages. I have no experience with the web yet, as I'm only a junior. Could someone explain to me (in English!!) the basic concepts, or show with examples the different ways and classes that could help me communicate with a website?
What do you want to do?
Send a request and grab the response in a string so you can process it? HttpWebRequest and HttpWebResponse will work.
If you need to connect through TCP/IP, FTP, or something other than HTTP, then you need the more generic WebRequest and WebResponse.
All four classes above are in the System.Net namespace.
If you want to build a service on the web side that you can consume, then today in .NET you should choose WCF (RESTful style).
Hope it helps you find your way :)
As an example using HttpWebRequest and HttpWebResponse, maybe some code will help you understand better.
The case: send a request to a URL and get the response; it's like clicking the URL and grabbing all the HTML code that will be there after the click:
private void btnSendRequest_Click(object sender, EventArgs e)
{
    textBox1.Text = "";
    try
    {
        String queryString = "user=myUser&pwd=myPassword&tel=+123456798&msg=My message";
        byte[] requestByte = Encoding.Default.GetBytes(queryString);
        // build our request
        WebRequest webRequest = WebRequest.Create("http://www.sendFreeSMS.com/");
        webRequest.Method = "POST";
        webRequest.ContentType = "application/x-www-form-urlencoded"; // we're sending form data, not XML
        webRequest.ContentLength = requestByte.Length;
        // create our stream to send
        Stream webDataStream = webRequest.GetRequestStream();
        webDataStream.Write(requestByte, 0, requestByte.Length);
        // get the response from our stream
        WebResponse webResponse = webRequest.GetResponse();
        webDataStream = webResponse.GetResponseStream();
        // convert the result into a String
        StreamReader webResponseSReader = new StreamReader(webDataStream);
        String responseFromServer = webResponseSReader.ReadToEnd().Replace("\n", "").Replace("\t", "");
        // close everything
        webResponseSReader.Close();
        webResponse.Close();
        webDataStream.Close();
        // You now have the HTML in the responseFromServer variable, use it :)
        textBox1.Text = responseFromServer;
    }
    catch (Exception ex)
    {
        textBox1.Text = ex.Message;
    }
}
The code does not work as-is because the URL is fictitious, but you get the idea. :)
You could use the System.Net.WebClient class of the .NET Framework. See the MSDN documentation here.
Simple example:
using System;
using System.Net;
using System.IO;

public class Test
{
    public static void Main (string[] args)
    {
        if (args == null || args.Length == 0)
        {
            throw new ApplicationException ("Specify the URI of the resource to retrieve.");
        }
        WebClient client = new WebClient ();
        // Add a user agent header in case the
        // requested URI contains a query.
        client.Headers.Add ("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
        Stream data = client.OpenRead (args[0]);
        StreamReader reader = new StreamReader (data);
        string s = reader.ReadToEnd ();
        Console.WriteLine (s);
        data.Close ();
        reader.Close ();
    }
}
There are other useful methods on WebClient that let you download and save resources from a specified URI.
The DownloadFile() method, for example, downloads a resource and saves it to a local file, while the UploadFile() method uploads a local file to a specified URI.
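For example, a one-line download to disk (both the source URL path and the destination file name here are purely illustrative):
using System.Net;

WebClient client = new WebClient();
client.DownloadFile("http://www.heise.de/favicon.ico", @"C:\temp\favicon.ico"); // hypothetical paths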
UPDATE:
WebClient is simpler to use than WebRequest. Normally you could stick to using just WebClient unless you need to manipulate requests/responses in an advanced way. See this article where both are used: http://odetocode.com/Articles/162.aspx

Categories

Resources