How to read HTTP POST request message parameters in C#

I am able to read the URL and the entire page, but not the HTTP POST request message parameters, in C#.
In my situation I POST to a URL on a site, and after they verify the request they send me an HTTP POST message back with parameters such as id.
Here is my C# code:
HttpWebRequest request1 = (HttpWebRequest)WebRequest.Create(uri);
string postsourcedata = "processing=true&Sal=5000000";
request1.Method = "POST";
request1.ContentType = "application/x-www-form-urlencoded";
request1.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)";
// Encode the body first so ContentLength is the byte count, not the character count.
byte[] bytes1 = Encoding.UTF8.GetBytes(postsourcedata);
request1.ContentLength = bytes1.Length;
using (Stream writeStream1 = request1.GetRequestStream())
{
    writeStream1.Write(bytes1, 0, bytes1.Length);
}
// This must use request1; the original referenced an undeclared "request".
HttpWebResponse response = (HttpWebResponse)request1.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader readStream = new StreamReader(responseStream, Encoding.UTF8);
string page = readStream.ReadToEnd();
return page;
They send me request parameters such as id and text. How do I read these parameters on my side? I am posting to the website through a web service.
Can anyone help me with this?

If they are sending you an HTTP POST message, that means you need a web server, or something else that understands the HTTP protocol, to handle the requests.
What I mean is that, by your description, it looks like they are sending an HTTP request to port 80 or port 443 (HTTPS), and you should have an ASP.NET page or handler to receive it. Once they hit that page, you can simply read the posted form fields:
Request.Form["Id"]
Request.Form["Text"]
And so on.
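For example, a minimal sketch of an ASP.NET generic handler (.ashx) that reads the posted parameters could look like the following. The parameter names id and text come from the question; the handler name and the response body are placeholders:
// CallbackHandler.ashx.cs -- illustrative handler that receives the POST.
using System.Web;

public class CallbackHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Request.Form holds the URL-encoded body of a POST;
        // Request.QueryString would hold parameters sent in the URL.
        string id = context.Request.Form["id"];
        string text = context.Request.Form["text"];

        context.Response.ContentType = "text/plain";
        context.Response.Write("Received id=" + id + ", text=" + text);
    }

    public bool IsReusable { get { return false; } }
}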

Related

How to ignore a CloudFlare warning with HttpWebRequest?

Here I do a POST request. I know the address (I am not the owner) and it is not malicious; I just want to POST the request and get the desired response.
Web request code:
HttpWebRequest oHTTP = (HttpWebRequest)WebRequest.Create("https://some-random-website.com/");
oHTTP.Method = "POST";
oHTTP.ContentType = "application/x-www-form-urlencoded";
oHTTP.UserAgent = "Mozilla/5.0 (Windows NT 9; WOW64; rv:38.0) Firefox:40.1";
// parameters is assumed to be an already URL-encoded form string;
// encode it once so ContentLength matches the bytes actually written.
byte[] body = Encoding.ASCII.GetBytes(parameters);
oHTTP.ContentLength = body.Length;
using (Stream stream = oHTTP.GetRequestStream())
    stream.Write(body, 0, body.Length);
HttpWebResponse response = (HttpWebResponse)oHTTP.GetResponse();
string oReceived = new StreamReader(response.GetResponseStream() ?? throw new InvalidOperationException()).ReadToEnd();
Response title:
Warning: Suspected Phishing Site Ahead!
Then there is a button that says:
Dismiss this warning and enter site
So my question is: how can I ignore this warning and post my request successfully? Should I change my UserAgent?
Note 1: I used Fiddler to inspect both the request and response headers and content.
Note 2: I have done the same thing in AutoIt, but it uses WinHttp, and there is no issue on this website.
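If the "Dismiss this warning and enter site" button works by setting a cookie, which is how such interstitial pages often behave (an assumption; the actual mechanism would need to be confirmed in Fiddler), then sharing a CookieContainer between the first request and a retry would let the retry carry whatever cookie the warning response set. A rough sketch of that idea:
// Sketch only: assumes the warning is dismissed via a cookie.
// The URL is the placeholder from the question above.
CookieContainer jar = new CookieContainer();

HttpWebRequest first = (HttpWebRequest)WebRequest.Create("https://some-random-website.com/");
first.CookieContainer = jar; // cookies from the warning page land here
using (HttpWebResponse warning = (HttpWebResponse)first.GetResponse())
using (StreamReader reader = new StreamReader(warning.GetResponseStream()))
{
    // Inspect this HTML (or a Fiddler capture) to find the dismiss
    // link or the cookie the "Dismiss" button sets.
    string warningHtml = reader.ReadToEnd();
}

HttpWebRequest retry = (HttpWebRequest)WebRequest.Create("https://some-random-website.com/");
retry.CookieContainer = jar; // replays whatever cookies were collected
// ... then set Method, ContentType and the body as in the code above.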

Can't Download the HTML of a Specific Website

I am doing web parsing using a C# console application.
My code is:
var req = WebRequest.Create("http://watch.squidtv.net/");
req.BeginGetResponse(r =>
{
    var response = req.EndGetResponse(r);
    var stream = response.GetResponseStream();
    var reader = new StreamReader(stream, true);
    var str = reader.ReadToEnd();
    Console.WriteLine(str);
}, null);
This code runs fine with other URLs, but when I changed the URL to http://watch.squidtv.net/, two problems occurred:
First, it does not download the HTML.
Second, the CPU becomes noisy (the fan spins up).
Then I changed the code to use WebClient, like this:
var client = new WebClient();
string htmlCode = client.DownloadString("http://watch.squidtv.net");
Console.WriteLine(htmlCode);
But the problem is the same :(
What could the problem be?
I found the solution.
The problem was in the HTTP headers: the server returns the page gzip-encoded, and a plain HttpWebRequest was not decompressing it, which caused the problem. When I used this code, the problem was solved:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://watch.squidtv.net/");
// AutomaticDecompression both advertises gzip/deflate in Accept-Encoding
// and transparently decompresses the response, so the header does not
// need to be set by hand as well.
req.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
req.Method = "GET";
req.UserAgent = "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)";
string htmlCode;
using (StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream()))
{
    htmlCode = reader.ReadToEnd();
}
One idea: possibly you will have to specify more in your WebRequest so that the SquidTV server knows to send you back the HTML; a sketch follows below.
Consider that a browser sends lots of headers to the server. If you want to take a look, use Fiddler or Wireshark to see all the extra data that gets sent.
A firewall could be another issue: you may be sending out a request that is not allowed, so nothing comes back. Here again an intermediate tool like Wireshark or Fiddler is useful for seeing whether the request is at least getting out.
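For instance, here is a minimal sketch of adding browser-like headers; the exact values a real browser sends should be copied from a Fiddler capture, so these are illustrative rather than confirmed requirements of the site:
// Illustrative only: mimic a browser's headers; copy the real values
// from a Fiddler capture of a working browser session.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://watch.squidtv.net/");
req.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36";
req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
req.Headers["Accept-Language"] = "en-US,en;q=0.8";
req.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

using (var resp = (HttpWebResponse)req.GetResponse())
using (var reader = new StreamReader(resp.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());
}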

How to log in to a website using HttpWebRequest via my web app or generic handler and access the content?

Basically, I am making a chat app for my university's students only, and for that I have to make sure they are genuine by checking their details on the UMS (university management system) and fetching their basic details, so that they chat under their real identities. I am nearly done with my chat app; only the login is left.
So I want to log in to my UMS page via my website from a generic handler,
and then navigate to another page in it to access their basic info, keeping the session alive.
I researched HttpWebRequest and failed to log in with my credentials.
https://ums.lpu.in/lpuums
(made in ASP.NET)
I tried code from other posts for the login.
I am a novice at this part, so bear with me; any help will be appreciated.
Without an actual handshake with UMS via a defined API, you would end up scraping UMS HTML, which is bad for various reasons.
I would suggest you read up on Single Sign-On (SSO).
A few articles on SSO and ASP.NET:
1. CodeProject
2. MSDN
3. asp.net forum
Edit 1
Although I think this is a bad idea, since you say you are out of options, here is a link that shows how Html Agility Pack can help in scraping web pages.
Beware of the drawbacks of screen scraping: changes on the UMS side will not be communicated to you, and you will see your application stop working all of a sudden.
public string Scrap(string Username, string Password)
{
    string Url1 = "https://www.example.com"; // first url
    string Url2 = "https://www.example.com/login.aspx"; // url to post the request to
    // first request: fetch the login page and save its cookies
    CookieContainer jar = new CookieContainer();
    HttpWebRequest request1 = (HttpWebRequest)WebRequest.Create(Url1);
    request1.CookieContainer = jar;
    // Get the response from the server and save the cookies from the first request.
    HttpWebResponse response1 = (HttpWebResponse)request1.GetResponse();
    // second request
    string postData = "***viewstate here***"; // VIEWSTATE
    HttpWebRequest request2 = (HttpWebRequest)WebRequest.Create(Url2);
    request2.CookieContainer = jar;
    request2.KeepAlive = true;
    request2.Referer = Url2;
    request2.Method = WebRequestMethods.Http.Post;
    request2.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
    request2.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2";
    request2.ContentType = "application/x-www-form-urlencoded";
    request2.AllowWriteStreamBuffering = true;
    request2.ProtocolVersion = HttpVersion.Version11;
    request2.AllowAutoRedirect = true;
    byte[] byteArray = Encoding.ASCII.GetBytes(postData);
    request2.ContentLength = byteArray.Length;
    Stream newStream = request2.GetRequestStream(); // open connection
    newStream.Write(byteArray, 0, byteArray.Length); // send the data
    newStream.Close();
    HttpWebResponse response2 = (HttpWebResponse)request2.GetResponse();
    string responseData; // was used without being declared in the original
    using (StreamReader sr = new StreamReader(response2.GetResponseStream()))
    {
        responseData = sr.ReadToEnd();
    }
    return responseData;
}
This is the code that works for me. Anyone can plug in their own URLs and viewstate to scrape ASP.NET websites, and you need to take care of the cookies too.
Other (non-ASP.NET) websites don't require a viewstate.
Use Fiddler to find what needs to go into the headers, the viewstate, or the cookies; a sketch of extracting the viewstate follows below.
Hope this helps if someone is having the same problem. :)
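As a complement, here is a minimal sketch of pulling the hidden form fields out of the login page with Html Agility Pack (mentioned above) instead of hard-coding them. __VIEWSTATE and __EVENTVALIDATION are standard ASP.NET WebForms field names, but the credential field names below are hypothetical:
// Sketch: extract the WebForms hidden fields from the fetched login page.
// Requires the HtmlAgilityPack NuGet package.
using HtmlAgilityPack;

HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(loginPageHtml); // the HTML fetched by the first request above

string viewState = doc.DocumentNode
    .SelectSingleNode("//input[@id='__VIEWSTATE']")
    ?.GetAttributeValue("value", "");
string eventValidation = doc.DocumentNode
    .SelectSingleNode("//input[@id='__EVENTVALIDATION']")
    ?.GetAttributeValue("value", "");

// The values must be URL-encoded when placed into the POST body.
// txtUsername/txtPassword are hypothetical; read the real field
// names from the form in a Fiddler capture.
string postData = "__VIEWSTATE=" + Uri.EscapeDataString(viewState)
    + "&__EVENTVALIDATION=" + Uri.EscapeDataString(eventValidation)
    + "&txtUsername=" + Uri.EscapeDataString(Username)
    + "&txtPassword=" + Uri.EscapeDataString(Password);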

HttpWebRequest, C# and HTTPS

I have tried many ways to log in to an HTTPS website programmatically, but I am having issues. Every time, I get an error stating that my login and password are incorrect. I am sure they are correct, because I can log in to the site via a browser using the same credentials.
Failing Code
CookieContainer container = new CookieContainer(); // was used below without being declared
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://www.majesticseo.com/account/login?EmailAddress=myemail&Password=mypass&RememberMe=1");
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0) Gecko/20100101 Firefox/8.0";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.UnsafeAuthenticatedConnectionSharing = true;
request.Method = "POST";
request.KeepAlive = true;
request.ContentType = "application/x-www-form-urlencoded";
request.AllowAutoRedirect = true;
request.CookieContainer = container;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
foreach (Cookie cookie1 in response.Cookies)
{
    container.Add(cookie1);
}
Stream stream = response.GetResponseStream();
string html = new StreamReader(stream).ReadToEnd();
Console.WriteLine(html);
That site uses HTTP POST for login, and it does not send the username and password in the URL.
The correct login URL is https://www.majesticseo.com/account/login
You need to create a string of data to post, convert it to a byte array, set the content length, and then make your request. It is very important that the Content-Length header is sent; without it, the POST will not work.
// post to the login URL itself; no credentials in the query string
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://www.majesticseo.com/account/login");
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0) Gecko/20100101 Firefox/8.0";
request.Referer = "https://www.majesticseo.com/account/login";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.UnsafeAuthenticatedConnectionSharing = true;
request.Method = "POST";
request.KeepAlive = true;
request.ContentType = "application/x-www-form-urlencoded";
request.AllowAutoRedirect = true;
// the post string for the login form
string postData = "redirect=&EmailAddress=EMAIL&Password=PASS";
byte[] postBytes = System.Text.Encoding.ASCII.GetBytes(postData);
request.ContentLength = postBytes.Length;
System.IO.Stream str = request.GetRequestStream();
str.Write(postBytes, 0, postBytes.Length);
str.Close();
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
System.IO.Stream stream = response.GetResponseStream();
string html = new System.IO.StreamReader(stream).ReadToEnd();
Console.WriteLine(html);
You are trying to POST something (I can't tell what from your code), but not credentials. I guess the web page shows you a form where you enter a username (email address?) and password, and the browser then posts this form. Consequently, you need to replicate the browser's behavior: encode the form contents and send them in your POST request. Use the developer tools of a popular browser to see exactly what the client sends to the server and how it encodes the form data. Next, it is very likely that your request requires certain cookies, which you can collect by visiting another page (e.g. the login page) first. Sending preset cookies (like you do in the commented code) won't work for most sites.
In other words, the proper mechanism is (see the sketch after this list):
GET the login web page.
Collect the cookies.
POST the form data, passing the collected cookies in the request.
Collect any other cookies that may be sent after login.
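A minimal sketch of that flow, reusing the majesticseo.com URLs and form fields from the answer above (everything else is illustrative):
// Sketch of GET-then-POST with a shared cookie jar.
CookieContainer jar = new CookieContainer();

// Steps 1-2: GET the login page; any cookies it sets land in the jar.
HttpWebRequest get = (HttpWebRequest)WebRequest.Create("https://www.majesticseo.com/account/login");
get.CookieContainer = jar;
((HttpWebResponse)get.GetResponse()).Close();

// Step 3: POST the form data; the same jar replays the collected cookies.
HttpWebRequest post = (HttpWebRequest)WebRequest.Create("https://www.majesticseo.com/account/login");
post.CookieContainer = jar;
post.Method = "POST";
post.ContentType = "application/x-www-form-urlencoded";
byte[] body = Encoding.ASCII.GetBytes("redirect=&EmailAddress=EMAIL&Password=PASS");
post.ContentLength = body.Length;
using (Stream s = post.GetRequestStream())
    s.Write(body, 0, body.Length);

// Step 4: cookies set after login (e.g. the session cookie) are now in the
// jar and are sent automatically on later requests that use the same jar.
using (HttpWebResponse resp = (HttpWebResponse)post.GetResponse())
using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());
}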

HttpWebRequest gets an empty response when requesting a search from Bing

I have the following code that sends an HttpWebRequest to Bing. When I request the URL below, though, it returns what appears to be an empty response when it should return a list of results.
var response = string.Empty;
var httpWebRequest = WebRequest.Create("http://www.bing.com/search?q=stackoverflow&count=100") as HttpWebRequest;
httpWebRequest.Method = WebRequestMethods.Http.Get;
httpWebRequest.Headers.Add("Accept-Language", "en-US");
httpWebRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Win32)";
httpWebRequest.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
using (var httpWebResponse = httpWebRequest.GetResponse() as HttpWebResponse)
using (var responseStream = httpWebResponse.GetResponseStream())
{
    // Wrap the raw stream in a decompressor when the response is compressed.
    Stream stream = responseStream;
    if (httpWebResponse.ContentEncoding.ToLower().Contains("gzip"))
        stream = new GZipStream(stream, CompressionMode.Decompress);
    else if (httpWebResponse.ContentEncoding.ToLower().Contains("deflate"))
        stream = new DeflateStream(stream, CompressionMode.Decompress);
    var streamReader = new StreamReader(stream, Encoding.UTF8);
    response = streamReader.ReadToEnd();
}
It's pretty standard code for requesting and receiving a web page. Any ideas why the response is empty? Thanks in advance.
EDIT: I had left a query string parameter out of the URL; it also had &count=100, which I have now corrected above. It seems to work for values of 50 and below, but returns nothing when larger. The same URL works fine in the browser, but not with this web request.
That makes me think the issue is that the response is large and HttpWebResponse is not handling it the way I have things set up. Just a guess, though.
This works just fine on my machine. Perhaps you are IP-banned from Bing?
Your code works fine on my machine.
I suggest you get yourself a copy of Fiddler and examine the actual HTTP session occurring; it may be a proxy or firewall thing. A diagnostic sketch follows below.
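As a starting point, a small illustrative diagnostic (not a confirmed fix) is to dump the status code and response headers, to tell a genuinely empty body apart from an error page or an unexpected encoding:
// Illustrative diagnostics: dump status and headers before reading the body.
using (var resp = httpWebRequest.GetResponse() as HttpWebResponse)
{
    Console.WriteLine("Status: {0} ({1})", (int)resp.StatusCode, resp.StatusDescription);
    foreach (string key in resp.Headers.AllKeys)
        Console.WriteLine("{0}: {1}", key, resp.Headers[key]);
    Console.WriteLine("ContentEncoding: '{0}'", resp.ContentEncoding);
}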

Categories

Resources