I was asked in my internship to create a form that submits information to a server. Now I am supposed to submit that form programmatically as many times as possible, to test the site and see whether it would break when receiving so much data at once. I'm a newbie and I tried searching for how to do it, but I'm still not getting it. This is what I tried:
using System;
using System.IO;
using System.Net;
using System.Text;

namespace project_1
{
    public partial class WebForm1 : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Build the POST request to the other project.
            WebRequest req = WebRequest.Create("http://localhost:68644/project-2");
            string postData = "item1=11111&item2=22222&Item3=33333";
            byte[] send = Encoding.Default.GetBytes(postData);

            req.Method = "POST";
            req.ContentType = "application/x-www-form-urlencoded";
            req.ContentLength = send.Length;

            // Write the form data into the request body.
            Stream sout = req.GetRequestStream();
            sout.Write(send, 0, send.Length);
            sout.Flush();
            sout.Close();

            // Read the response back.
            WebResponse res = req.GetResponse();
            StreamReader sr = new StreamReader(res.GetResponseStream());
            string returnvalue = sr.ReadToEnd();
        }
    }
}
P.S. I got this idea from: How do you programmatically fill in a form and 'POST' a web page?
But I keep getting this error message: "The request was aborted: The operation has timed out"
Thanks for your anticipated help.
OK, I think you are misunderstanding what is going on when you request a page from ASP.NET or any other HTTP server.
First, the client requests a page using HTTP. Usually this is the initial GET request for the page, e.g. /MyForm.aspx, made from a browser.
Then the C# code gets executed on the server. This creates the HTML page.
The client (most likely a browser) gets the page that the server created and renders the HTML, JavaScript, CSS, and so on. This is done on the client machine, which can't run C# if you are making the request from the browser.
Then the client fills in the fields in the form, and once the submit button is pressed the browser sends another HTTP request from the client to the server, now using the POST method. POST is different from GET because it is intended to have a body section. Inside that body you have your form data, usually formatted as an x-www-form-urlencoded string.
Then the server runs the server-side code for the POST.
The best way to see what is going on there is to use Fiddler; it's an HTTP proxy intended to show you HTTP requests.
What your code is doing right now is creating the POST request part of the whole process, and the C# code works fine on that side. The error you are getting is related to whatever is running on project-2, since there is no response from the server.
This can be caused by many things (bad proxy setup, long-running code on the server, DNS issues, and so on).
BUT most well-written servers will prevent you from doing this by design. It's a security issue, and doing this should not be simple, so you might be getting no response by design.
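If the endpoint does respond, a rough way to submit the form repeatedly from C# is just to put the same POST logic in a loop with an explicit timeout, so a dead endpoint fails fast instead of hanging. This is only a sketch: the URL is a placeholder and the form fields are the ones from the question.

using System;
using System.IO;
using System.Net;
using System.Text;

class FormFlooder
{
    static void Main()
    {
        string url = "http://localhost:12345/project-2";           // placeholder; use the real project-2 URL
        string postData = "item1=11111&item2=22222&Item3=33333";   // form fields from the question
        byte[] body = Encoding.UTF8.GetBytes(postData);

        for (int i = 0; i < 1000; i++)   // arbitrary number of submissions
        {
            try
            {
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
                req.Method = "POST";
                req.ContentType = "application/x-www-form-urlencoded";
                req.ContentLength = body.Length;
                req.Timeout = 5000;   // fail fast instead of waiting out the default timeout

                using (Stream s = req.GetRequestStream())
                {
                    s.Write(body, 0, body.Length);
                }
                using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
                {
                    Console.WriteLine("{0}: {1}", i, resp.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0}: {1}", i, ex.Message);   // e.g. the timeout the question mentions
            }
        }
    }
}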
I am working with the Shift4Shop web API. These guys used to be known as threeDCart, if that helps anyone. It's an eCommerce platform.
We are trying to apply a promotion code to an open cart.
Support has verified there is no API way to do that.
There is a URL that will apply the promotion. This is often emailed to customers so they can apply the promo if they choose to.
We can paste the correct URL into Chrome, Brave, Edge, or Firefox and it correctly applies the promotion.
We used private tabs for the different browser tests and the browsers were 'cold': we launched the browser and immediately entered the URL.
We think this eliminates the possibility that cookies are necessary.
https://www.mywebsite.com/continue_order.asp?orderkey=CDC886A7O4Srgyn278668&ApplyPromo=40pro
However, when I try to do this in C#, I get a response that is redirected to a page that says 'The cart is empty'.
The promotion is not applied.
I am stumped as to how the website would respond differently to the same URL when it comes from a browser as opposed to the C# System.Net library.
Here is the C# code I am using:
using System.Net;
// I really create this using my data, but this is the resulting URL
string url = "https://www.mywebsite.com/continue_order.asp?orderkey=CDC886A7O4Srgyn278668&ApplyPromo=40pro";
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
HttpWebResponse response = (HttpWebResponse)req.GetResponse();
string result = "";
using (StreamReader rdr = new StreamReader(response.GetResponseStream()))
{
result = rdr.ReadToEnd();
}
You can also call view_cart.asp with the same parameters and the browsers will cause the promo to be applied.
I have tried setting the method to [ (unset), GET, get ].
There has to be something about the request settings that is preventing this from working.
I do not know what else to try.
Any thoughts are appreciated.
As per Shift4Shop support, continue_order.asp returns a 302.
The browsers land on continue_order.asp and process that page.
They then continue on to view_order.asp.
The two pages together perform functionality that you cannot get by just calling continue_order.asp.
Thanks to Savoy with Shift4Shop for helping with that.
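For anyone else hitting this, a minimal sketch of replaying that two-step flow from C# is below. It assumes the session state is carried in cookies, so one CookieContainer is shared across the redirect from continue_order.asp to view_order.asp; the user agent value is just an example.

using System;
using System.IO;
using System.Net;

class PromoReplay
{
    static void Main()
    {
        string url = "https://www.mywebsite.com/continue_order.asp?orderkey=CDC886A7O4Srgyn278668&ApplyPromo=40pro";

        // One cookie jar for the whole redirect chain so the session set by
        // continue_order.asp is still present when view_order.asp runs.
        CookieContainer jar = new CookieContainer();

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
        req.CookieContainer = jar;
        req.AllowAutoRedirect = true;    // follow the 302 on to view_order.asp
        req.UserAgent = "Mozilla/5.0";   // example value; some sites treat agent-less requests differently

        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        using (StreamReader rdr = new StreamReader(resp.GetResponseStream()))
        {
            Console.WriteLine(resp.ResponseUri);   // should end up on view_order.asp
            Console.WriteLine(rdr.ReadToEnd());
        }
    }
}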
I'm using C# to download the HTML of a webpage, but when I compare the page's actual source with what I downloaded, they are completely different. Here is the code:
public static string getSourceCode(string url)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
    req.Method = "GET";
    HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

    // Read the response body. I also tried reading it explicitly as UTF-8 via
    // new StreamReader(resp.GetResponseStream(), Encoding.UTF8) and returning that instead.
    StreamReader sr = new StreamReader(resp.GetResponseStream());
    string sourceCode = sr.ReadToEnd();
    sr.Close();
    resp.Close();
    return sourceCode;
}

private void button1_Click(object sender, EventArgs e)
{
    string url = "http://www.booking.com/hotel/tr/nena.en-gb.html?label=gog235jc-hotel-en-tr-mina-nobrand-tr-com-T002-1;sid=fcc1c6c78f188a42870dcbe1cabf2fb4;dcid=1;origin=disamb;srhash=3938286438;srpos=5";
    string sourceCode = Finder.getSourceCode(url);

    // Here the downloaded code is completely different from the web page's code.
    StreamWriter sw = new StreamWriter("HotelPrice.txt");
    sw.Write(sourceCode);
    sw.Close();

    #region Get Score Value
    int StartIndex = sourceCode.IndexOf("<strong id=\"rsc_total\">") + 23;
    sourceCode = sourceCode.Substring(StartIndex, 3);
    #endregion
}
Most likely the cause of the difference is that when you use the browser to request the page, the request is part of a session, which is not established when you request the same page using WebRequest.
Looking at the URL, it looks like the query parameter sid is a session identifier or a nonce of some sort. The page probably verifies it against the actual session ID, and when it determines that they are different it gives you some sort of 'Oops... wrong session' response.
In order to mimic the browser's request, you will have to make sure you generate the proper request, which may need to include one or more of the following:
cookies (previously sent to you by the webserver)
a valid/proper user agent
some specific query parameters (again depending on what the page expects)
potentially a referrer URL
authentication credentials
The best way to determine what you need is to follow a conversation between your browser and the web server serving that page from start to finish and see exactly which pages are requested, in what order, and what information is passed back and forth. You can accomplish this using Wireshark or Fiddler - both free tools!
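As an illustration only (the concrete values are placeholders, not what this particular site necessarily checks), a request that carries a cookie container, a browser-like user agent, and a referrer can be set up like this:

using System.IO;
using System.Net;

class BrowserLikeRequest
{
    public static string GetSourceCode(string url)
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
        req.Method = "GET";

        // The items from the list above; fill in whatever the real conversation shows is needed.
        req.CookieContainer = new CookieContainer();                  // cookies previously sent by the server accumulate here
        req.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";  // a browser-like user agent (example value)
        req.Referer = "http://www.booking.com/";                      // a plausible referrer URL (assumed)

        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
        {
            return sr.ReadToEnd();
        }
    }
}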
I ran into the same problem when trying to use HttpWebRequest to crawl a page, and the page used AJAX to load all the data I was after. In order to get the AJAX calls to occur, I switched to the WebBrowser control.
This answer provides an example of how to use the control outside of a WinForms app. You'll want to hook up to the browser's DocumentCompleted event before parsing the page. Be warned: this event may fire multiple times before the page is ready to be parsed. You may want to add something like this
if(browser.ReadyState == WebBrowserReadyState.Complete)
to your event handler, to know when the page is completely done loading.
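A rough sketch of that pattern (placeholder URL, console host just for illustration) might look like this:

using System;
using System.Windows.Forms;

class AjaxPageScraper
{
    [STAThread]
    static void Main()
    {
        // The WebBrowser control needs an STA thread and a running message loop.
        WebBrowser browser = new WebBrowser();
        browser.ScriptErrorsSuppressed = true;

        browser.DocumentCompleted += (sender, e) =>
        {
            // DocumentCompleted can fire once per frame, so wait for the whole page.
            if (browser.ReadyState == WebBrowserReadyState.Complete)
            {
                Console.WriteLine(browser.Document.Body.OuterHtml);   // live DOM body, including AJAX-loaded content
                Application.ExitThread();
            }
        };

        browser.Navigate("http://example.com/");   // placeholder URL
        Application.Run();                          // pump messages until ExitThread is called
    }
}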
I'm trying to log in to a website using C# and the WebRequest class. This is the code I wrote up last night to send POST data to a web page:
public string login(string URL, string postData)
{
    Stream webpageStream;
    WebResponse webpageResponse;
    StreamReader webpageReader;
    byte[] byteArray = Encoding.UTF8.GetBytes(postData);

    // _webRequest is a WebRequest field declared elsewhere in the class.
    _webRequest = WebRequest.Create(URL);
    _webRequest.Method = "POST";
    _webRequest.ContentType = "application/x-www-form-urlencoded";
    _webRequest.ContentLength = byteArray.Length;

    // Write the form data into the request body.
    webpageStream = _webRequest.GetRequestStream();
    webpageStream.Write(byteArray, 0, byteArray.Length);

    // Read the response back.
    webpageResponse = _webRequest.GetResponse();
    webpageStream = webpageResponse.GetResponseStream();
    webpageReader = new StreamReader(webpageStream);
    string responseFromServer = webpageReader.ReadToEnd();

    webpageReader.Close();
    webpageStream.Close();
    webpageResponse.Close();

    return responseFromServer;
}
and it works fine, but I have no idea how I can modify it to send POST data to a login script and then save a cookie(?) and log in.
I have looked at my network transfers using Firebug on the website's login page, and it is sending POST data that looks like this:
accountName=myemail%40gmail.com&password=mypassword&persistLogin=on&app=com-sc2
As far as I'm aware, to be able to use my account with this website in my C# app I need to save the cookie that the web server sends, and then use it on every request? Is this right? Or can I get away with no cookie at all?
Any help is greatly appreciated, thanks! :)
The login process depends on the concrete web site. If it uses cookies, you need to use them.
I recommend using Firefox with an HTTP-header-watching plugin to look at how the headers are sent to your particular web site, and then implementing it the same way in C#. I answered a very similar question the day before yesterday, including an example with cookies. Look here.
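The cookie half usually comes down to attaching one CookieContainer to every request. A rough sketch (the URLs are placeholders; the field names are the ones from the question's POST data):

using System;
using System.IO;
using System.Net;
using System.Text;

class CookieLogin
{
    static void Main()
    {
        // One cookie jar reused across requests, so the session cookie set at login
        // is sent back automatically on every later request.
        CookieContainer jar = new CookieContainer();

        string loginUrl = "https://example.com/login";   // placeholder: the URL the login form posts to
        string postData = "accountName=myemail%40gmail.com&password=mypassword&persistLogin=on&app=com-sc2";
        byte[] body = Encoding.UTF8.GetBytes(postData);

        HttpWebRequest login = (HttpWebRequest)WebRequest.Create(loginUrl);
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.ContentLength = body.Length;
        login.CookieContainer = jar;

        using (Stream s = login.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }
        using (HttpWebResponse resp = (HttpWebResponse)login.GetResponse())
        {
            // The server's Set-Cookie headers are now stored in the jar.
        }

        // Any later request that carries the same jar behaves as a logged-in user.
        HttpWebRequest page = (HttpWebRequest)WebRequest.Create("https://example.com/account");   // placeholder
        page.CookieContainer = jar;
        using (HttpWebResponse resp = (HttpWebResponse)page.GetResponse())
        using (StreamReader rdr = new StreamReader(resp.GetResponseStream()))
        {
            Console.WriteLine(rdr.ReadToEnd());
        }
    }
}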
I've had more luck using the HtmlElement class to manipulate websites.
Here is a cross-post to an example of how logging in through code would work (provided you're using a WebBrowser control).
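For completeness, here is a small sketch of that approach: a form hosting a WebBrowser control that fills in the login fields through the DOM and clicks the site's own button. The element IDs and URL are placeholders; inspect the real login page to find the right ones.

using System.Windows.Forms;

public class LoginForm : Form
{
    private readonly WebBrowser webBrowser1 = new WebBrowser { Dock = DockStyle.Fill };

    public LoginForm()
    {
        Controls.Add(webBrowser1);

        webBrowser1.DocumentCompleted += (s, e) =>
        {
            if (webBrowser1.ReadyState != WebBrowserReadyState.Complete) return;

            HtmlElement user   = webBrowser1.Document.GetElementById("accountName");   // placeholder id
            HtmlElement pass   = webBrowser1.Document.GetElementById("password");      // placeholder id
            HtmlElement submit = webBrowser1.Document.GetElementById("loginButton");   // placeholder id

            if (user != null && pass != null && submit != null)
            {
                user.SetAttribute("value", "myemail@gmail.com");
                pass.SetAttribute("value", "mypassword");
                submit.InvokeMember("click");   // fires the page's own submit handler
            }
        };

        webBrowser1.Navigate("https://example.com/login");   // placeholder URL
    }
}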
Basically, I'm trying to grab an EXE from CNET's Download.com.
So I created a web parser, and so far all is going well.
Here is a sample link pulled directly from their site:
http://dw.com.com/redir?edId=3&siteId=4&oId=3001-20_4-10308491&ontId=20_4&spi=e6323e8d83a8b4374d43d519f1bd6757&lop=txt&tag=idl2&pid=10566981&mfgId=6250549&merId=6250549&pguid=PlvcGQoPjAEAAH5rQL0AAABv&destUrl=ftp%3A%2F%2F202.190.201.108%2Fpub%2Fryl2%2Fclient%2Finstaller-ryl2_v1673.exe
Here is the problem: when you attempt to download, it begins with HTTP, then redirects to an FTP site. I have tried .NET's WebClient and HttpWebRequest objects, and it looks like neither can follow this redirect.
This code fails at GetResponse():
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://dw.com.com/redir");
WebResponse response = req.GetResponse();
Now, I also tried this:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://dw.com.com/redir");
req.AllowAutoRedirect = false;
WebResponse response = req.GetResponse();
string s = new StreamReader(response.GetResponseStream()).ReadToEnd();
It does not throw the error anymore; however, the variable s turns out to be an empty string.
I'm at a loss! Can anyone help out?
You can get the value of the "Location" header from response.Headers, and then create a new FtpWebRequest to download that resource.
In your first code snippet you are redirected to a link using a different protocol (i.e. it's no longer HTTP, as in HttpWebRequest), so it fails due to what looks like a malformed HTTP response.
In the second part you're no longer redirected, so you don't receive an FTP response; you just get the HTTP redirect response itself (which is not malformed when interpreted as an HTTP response), and its body is apparently empty, which is why s is an empty string.
You need to acquire the FTP link. As ferozo wrote, you can do this by getting the value of the "Location" header, and then use an FtpWebRequest to access the file.
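Putting the two answers together, a sketch of the whole flow might look like this (the redirect URL stands in for the full link from the question, the local file name is a placeholder, and anonymous FTP credentials are assumed):

using System;
using System.IO;
using System.Net;

class FtpRedirectDownload
{
    static void Main()
    {
        // 1) Ask the HTTP side where the file really lives, without following the redirect.
        string redirUrl = "http://dw.com.com/redir?...";   // the full redirect URL from the question
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(redirUrl);
        req.AllowAutoRedirect = false;

        string ftpUrl;
        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        {
            ftpUrl = resp.Headers["Location"];   // e.g. ftp://.../installer-ryl2_v1673.exe
        }

        // 2) Download the file over FTP.
        FtpWebRequest ftp = (FtpWebRequest)WebRequest.Create(ftpUrl);
        ftp.Method = WebRequestMethods.Ftp.DownloadFile;
        ftp.Credentials = new NetworkCredential("anonymous", "anonymous@example.com");   // assumed anonymous FTP

        using (FtpWebResponse ftpResp = (FtpWebResponse)ftp.GetResponse())
        using (Stream src = ftpResp.GetResponseStream())
        using (FileStream dst = File.Create("installer.exe"))   // placeholder local file name
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
            {
                dst.Write(buffer, 0, read);
            }
        }
    }
}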
I have a C# console app (.NET 2.0 framework) that does an HTTP post using the following code:
StringBuilder postData = new StringBuilder(100);
postData.Append("post.php?");
postData.Append("Key1=");
postData.Append(val1);
postData.Append("&Key2=");
postData.Append(val2);
byte[] dataArray = Encoding.UTF8.GetBytes(postData.ToString());
HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create("http://example.com/");
httpRequest.Method = "POST";
httpRequest.ContentType = "application/x-www-form-urlencoded";
httpRequest.ContentLength = dataArray.Length;
Stream requestStream = httpRequest.GetRequestStream();
requestStream.Write(dataArray, 0, dataArray.Length);
requestStream.Flush();
requestStream.Close();
HttpWebResponse webResponse = (HttpWebResponse)httpRequest.GetResponse();
if (httpRequest.HaveResponse == true)
{
    Stream responseStream = webResponse.GetResponseStream();
    StreamReader responseReader = new System.IO.StreamReader(responseStream, Encoding.UTF8);
    String responseString = responseReader.ReadToEnd();
}
The outputs from this are:
webResponse.ContentLength = -1
webResponse.ContentType = text/html
webResponse.ContentEncoding is blank
The responseString is HTML with a title and body.
However, if I post the same URL into a browser (http://example.com/post.php?Key1=some_value&Key2=some_other_value), I get a small XML snippet like:
<?xml version="1.0" ?>
<RESPONSE RESULT="SUCCESS"/>
with none of the same HTML as in the application. Why are the responses so different? I need to parse the returned result which I am not getting in the HTML. Do I have to change how I do the post in the application? I don't have control over the server side code that accepts the post.
If you are indeed supposed to use the POST HTTP method, you have a couple of things wrong. First, this line:
postData.Append("post.php?");
is incorrect. You want to post to post.php; you don't want to post the value "post.php?" to the page. Just remove this line entirely.
This piece:
... WebRequest.Create("http://example.com/");
needs post.php added to it, so...
... WebRequest.Create("http://example.com/post.php");
Again, this is assuming you are actually supposed to be POSTing to the specified page instead of GETting. If you are supposed to be using GET, then the other answers already supplied apply.
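Putting those two fixes together, the corrected POST would look roughly like this (same host, field names, and values as in the question, and still assuming post.php actually reads POST data):

using System.IO;
using System.Net;
using System.Text;

class CorrectedPost
{
    public static string Post(string val1, string val2)
    {
        // The body contains only the form fields; "post.php" belongs in the URL, not in the data.
        string postData = "Key1=" + val1 + "&Key2=" + val2;
        byte[] dataArray = Encoding.UTF8.GetBytes(postData);

        HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create("http://example.com/post.php");
        httpRequest.Method = "POST";
        httpRequest.ContentType = "application/x-www-form-urlencoded";
        httpRequest.ContentLength = dataArray.Length;

        using (Stream requestStream = httpRequest.GetRequestStream())
        {
            requestStream.Write(dataArray, 0, dataArray.Length);
        }

        using (HttpWebResponse webResponse = (HttpWebResponse)httpRequest.GetResponse())
        using (StreamReader responseReader = new StreamReader(webResponse.GetResponseStream(), Encoding.UTF8))
        {
            return responseReader.ReadToEnd();
        }
    }
}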
You'll want to get an HTTP sniffer tool like Fiddler and compare the headers that are being sent from your app to the ones being sent by the browser. There will be something different that is causing the server to return a different response. When you tweak your app to send the same thing the browser is sending, you should get the same response. (It could be the user agent, cookies, anything, but something is surely different.)
I've seen this in the past.
When you run from a browser, the "User-Agent" in the header is "Mozilla ...".
When you run from a program, it's different and generally specific to the language used.
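If the user agent turns out to be the difference, HttpWebRequest lets you override it; the string below is just an example browser identifier, not something this server is known to require:

using System.Net;

class BrowserLikeUserAgent
{
    public static HttpWebRequest CreateRequest(string url)
    {
        HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(url);
        // Make the console app identify itself like a browser (example value only).
        httpRequest.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";
        return httpRequest;
    }
}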
I think you need to use a GET request instead of a POST. If the URL you're using has query string values (like ?Key1=some_value&Key2=some_other_value) then it's expecting a GET. Instead of adding POST values to your web request, just put the data in the query string:
HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create("http://example.com/post.php?Key1=" + val1 + "&Key2=" + val2);
httpRequest.Method = "GET";
httpRequest.ContentType = "application/x-www-form-urlencoded";
....
So, the result you're getting is different when you POST the data from your app because the server-side code produces different output when it can't find the data it's expecting in the query string.
In your code you specify the POST method, which sends the data to the PHP file without putting the data in the web address. When you put the information in the address bar, that is not the POST method; that is the GET method. The name may be confusing, but GET just means that the data is sent to the PHP file through the web address instead of behind the scenes, not that it is supposed to get any information. When you put the address in the browser, it is using GET.
Create a simple HTML form and specify POST as the method and your URL as the action. You will see that the information is sent without appearing in the address bar.
Then do the same thing but specify GET. You will see the information you sent in the address bar.
I believe the problem has something to do with the way your headers are set up for the WebRequest.
I have seen strange cases where attempting to simulate a browser by changing headers in the request makes a difference to the server.
The short answer is that your console application is not a web browser and the web server of example.com is expecting to interact with a browser.
You might also consider changing the ContentType to be "multipart/form-data".
What I find odd is that you are essentially posting nothing. The work is being done by the query string. Therefore, you probably should be using a GET instead of a POST.
Is the form expecting a cookie? That is another possible reason why it works in the browser and not from the console app.