I have a third party server that POSTs data to my C# Nancy console application with the following request:
POST //172.16.100.20 HTTP/1.1
User-Agent: P2000/3.6.0
Host: 172.16.100.20:40000
Server: remotesitename
Content-Type: text/xml
Content-Length: 2744
<MessageBase>
<BaseVersion>301</BaseVersion>
<MessageType>3</MessageType>
....
However, I just can't seem to get Nancy to receive the request. I've tried various catch-all routes such as
"(.*)"
"{uri*}"
"//172.16.100.20"
But none of them work with the above request (they generally work from a browser or Fiddler).
I've also tried hooking into the module-level and application-level Before handlers, but they don't fire either.
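For illustration, a minimal sketch of this kind of setup (assuming Nancy 1.x self-hosted in the console app; class names are placeholders, not my exact code):

using System;
using System.IO;
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class CatchAllModule : NancyModule
{
    public CatchAllModule()
    {
        // {uri*} is Nancy's greedy catch-all segment
        Post["/{uri*}"] = _ =>
        {
            string body = new StreamReader(Request.Body).ReadToEnd();
            Console.WriteLine("Received: " + body);
            return HttpStatusCode.OK;
        };
    }
}

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        // Application-level Before hook; returning null lets the request continue
        pipelines.BeforeRequest += ctx =>
        {
            Console.WriteLine("Before: " + ctx.Request.Method + " " + ctx.Request.Path);
            return null;
        };
    }
}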
The same code works if I send a correct request from my own test applications, with the origin as follows:
http://172.16.100.2
This simulated request was issued from the same third party server to the listening Nancy server over the local network and without firewalls or virus scanners.
Any ideas?
Cheers
Dave
Related
I have a C# / WCF service which needs to send a POST request to a third party company; it uploads documents to this outsourcing company. The C# code uses HttpClient.PostAsync and works without any problems in Visual Studio 2013 with IIS Express. I deployed a self-hosted WCF service on my server and it still works! But when I deploy this code on IIS (on the same server), I get the following exception: System.Web.HttpException (0x80004005): Forbidden.
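The call looks roughly like this (a minimal sketch; token, userId and documentBytes are placeholders, not the real values):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Minimal sketch of the call described above; token, userId and documentBytes are placeholders.
public async Task UploadAsync(byte[] documentBytes, string token, string userId)
{
    using (var client = new HttpClient())
    using (var content = new MultipartFormDataContent())
    {
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/vnd.v1+json"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);

        content.Add(new ByteArrayContent(documentBytes), "file", "document.pdf");

        // Note the double slash after "sandbox" - the same URL that appears in the exception.
        var response = await client.PostAsync(
            "https://third.party.company/api/sandbox//users/" + userId, content);
        response.EnsureSuccessStatusCode();
    }
}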
Here are the exception details:
HTTP Code [Forbidden]
Phrase="Forbidden"
Message="Method: POST, RequestUri: 'https://third.party.company/api/sandbox//users/73437a40-3827-4df6-855f-c58c00750007', Version: 1.1, Content: System.Net.Http.MultipartFormDataContent, Headers:
{
Accept: application/vnd.v1+json
Authorization: Bearer *****
Content-Type: multipart/form-data; boundary="ec9e6a56-103a-4728-a152-d86e836fe62e"
Content-Length: 138986
}" Result = [<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /api/sandbox//users/73437a40-3827-4df6-855f-c58c00750007
on this server.</p>
</body></html>
]
There could be two problems:
The POST request is too big
There is a hidden setting in IIS which blocks the request
The server runs under Windows Server 2012 R2 and IIS 8.5
Can you help me?
Many thanks!
The problem comes from the two slashes in the POST URL:
https://third.party.company/api/sandbox//users/73437a40-3827-4df6-855f-c58c00750007
It works without problems with only one.
I do not understand why there is no error in the IIS logs, or why the URL is truncated before api instead of users.
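If anyone hits the same thing, a simple way to avoid the duplicate slash is to trim the slashes before combining the URL parts. A sketch (baseUrl and userId are placeholders; client and content are the same objects used in the PostAsync call):

// Sketch: build the URL so the path can never contain "//".
string baseUrl = "https://third.party.company/api/sandbox/";   // may or may not end with '/'
string userId  = "73437a40-3827-4df6-855f-c58c00750007";

string requestUrl = baseUrl.TrimEnd('/') + "/users/" + userId;
// -> https://third.party.company/api/sandbox/users/73437a40-3827-4df6-855f-c58c00750007

var response = await client.PostAsync(requestUrl, content);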
I am trying to handle a website programmatically. Let's say I visit the page www.example.com/something. On the website there is a button which I am pressing. The code of the button looks something like this:
<form action="/something" method="POST" enctype="text/plain">
<input type="submit" class="button" value="Click me" >
</form>
Pressing this button updates the information on the website.
Now I would like to do this procedure programmatically to receive the content of the updated website after pressing the button.
Can someone point me in the right direction on how to do this, preferably in C#?
Thank you in advance!
Edit:
I used Fiddler to capture the HTTP request and response; it looks like this:
POST /something HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://example.com/something
Cookie: cookie1=cookiecontent; cookie2=cookiecontent
Connection: keep-alive
Content-Type: text/plain
Content-Length: 0
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Thu, 05 Dec 2013 23:36:31 GMT
Content-Length: 2202
Although the request includes cookies, they don't appear to be relevant. I decompressed the received content with Fiddler and found that the data I want is included in the response.
I am not very experienced with HTTP requests, so I am hoping that someone can help me convert this into a C# HTTP request to receive the content.
If the website in question is open and doesn't do any sort of cookie generation to validate requests (there are plenty of sites like this) then you can just use System.Net.WebRequest or similar to post the required form data, then examine the response. See this MSDN page for an example.
If the page does use cookies and so on you'll have to get a bit more creative. In some cases you can issue one web request to get the first page, examine the results for cookies and hidden form values and use those in your POST.
If all else fails then the Selenium WebDriver library will give you almost complete browser emulation with full access to the DOM. It's a bit more complex than using a WebRequest, but will work for pretty much everything you can use a web browser for.
Regardless of which method you use, Fiddler is a good debugging tool. Use it to compare what your C# code is doing to what the web browser is doing to see if there's anything your code isn't getting right.
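A rough sketch of the first approach, replicating the captured request with HttpWebRequest (the URL and the decision to skip cookies are assumptions based on the capture above):

using System;
using System.IO;
using System.Net;

// Sketch: replicate the captured POST (empty text/plain body) and read the updated page.
var request = (HttpWebRequest)WebRequest.Create("http://example.com/something");
request.Method = "POST";
request.ContentType = "text/plain";
request.ContentLength = 0;                               // the captured request had no body
request.Referer = "http://example.com/something";
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
request.CookieContainer = new CookieContainer();         // attach cookies here if they turn out to matter

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string updatedHtml = reader.ReadToEnd();
    Console.WriteLine(updatedHtml);
}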
Since it's a submit button, simulating the resulting HTTP request would be easier than simulating a click. First, I would use a program like Fiddler to inspect what is being sent when you submit the form. Then I would replicate that request, changing only the values that need changing, using HttpWebRequest. You can find an example here.
The resulting HttpWebResponse can then be parsed for data. Using something like HtmlAgilityPack makes that part easier.
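For the parsing step, something like this works once you have the response HTML (the XPath is just a placeholder for whatever element actually holds your data):

using System;
using HtmlAgilityPack;

// Sketch: pull the wanted values out of the response HTML.
var doc = new HtmlDocument();
doc.LoadHtml(updatedHtml);   // the string read from the HttpWebResponse

// Placeholder XPath - point it at the element that actually contains your data.
var nodes = doc.DocumentNode.SelectNodes("//table[@id='results']//td");
if (nodes != null)
{
    foreach (var node in nodes)
        Console.WriteLine(node.InnerText.Trim());
}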
You can do what you want with http://www.seleniumhq.org/projects/webdriver/. It is possible to do web automation with C# in a console program. I am using it for UI integration testing and it works fairly well.
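A minimal sketch of that approach (assumes the Selenium WebDriver NuGet package plus the Firefox driver; the selector matches the button markup shown in the question):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Sketch: load the page, click the submit button, read the updated HTML.
using (IWebDriver driver = new FirefoxDriver())
{
    driver.Navigate().GoToUrl("http://www.example.com/something");
    driver.FindElement(By.CssSelector("form input.button")).Click();
    string updatedHtml = driver.PageSource;
}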
I would look into a browser automation framework. I would usually do this in Python and have not used .NET for it, but a quick Google search yields quite a few results.
Among them:
http://watin.org/
Web automation using .NET
Can we script and automate a browser, preferably with .Net?
I have a very simple app that sends an HttpWebRequest and gets a response. I need to know the exact request sent to the server. Is it possible?
Something like this:
POST /path/script.cgi HTTP/1.0
From: frog@jmarshall.com
User-Agent: HTTPTool/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
Build a basic web server with System.Net.Sockets.TcpListener; its documentation example shows how to do this. Then point your HttpWebRequest at that server and see the results.
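A minimal sketch of such a listener: it accepts one connection, dumps whatever the client sent (request line, headers and the start of the body), sends back an empty 200 so the client doesn't hang, and exits. The port is arbitrary.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch: a one-shot "server" that prints the raw bytes of the incoming request.
var listener = new TcpListener(IPAddress.Loopback, 8080);
listener.Start();

using (var client = listener.AcceptTcpClient())
using (var stream = client.GetStream())
{
    var buffer = new byte[8192];
    int read = stream.Read(buffer, 0, buffer.Length);
    Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));

    // Reply with an empty 200 so the calling HttpWebRequest completes cleanly.
    var reply = Encoding.ASCII.GetBytes("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n");
    stream.Write(reply, 0, reply.Length);
}
listener.Stop();

// Then point the HttpWebRequest at http://localhost:8080/ and watch the console.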
I have a web application (which I have no control over) that I need to send an HTTP POST to programmatically. Currently I'm using HttpWebRequest like this:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://someserver.com/blah/blah.aspx");
However, the application was returning an "Unknown Server Error" (not the IIS error, a custom application error page) when posting the data. Using Fiddler to compare my POST with IE's POST, I can see the only difference is in the POST line of the request:
In Internet Explorer, Fiddler (Raw view) shows the traffic as
POST /blah/blah.aspx HTTP/1.1
In my C# program, Fiddler (Raw view) records the traffic as
POST https://someserver.com/blah/blah.aspx HTTP/1.1
This is the only difference between the two requests.
From what I've researched so far, it seems there is no way to make HttpWebRequest.Create post the relative URL. Note: I see many posts on "how to use relative URLs", but those suggestions do not work, as the actual POST is still done using an absolute URL (when you sniff the HTTP traffic).
What is the simplest way to accomplish this POST with a relative URL?
(Traffic is NOT going through a proxy)
Update: For the time being I'm using IE automation for the scheduled perf test instead of the method above. I might look at another scripting language, as I did want to test without any browser.
No, you can't do a POST without the server in the URL.
One possible reason your program fails is that it does not use the correct proxy and as a result can't resolve the server name.
Note: Fiddler shows path and host separately in the view you are talking about.
Configure your program to use Fiddler as a proxy (127.0.0.1:8888) and compare the requests you are making with the browser's. Don't forget to switch Fiddler to "show all processes".
Here is an article on configuring Fiddler for different types of environments, including C# code: Fiddler: Configuring clients
HttpWebRequest objRequest = (HttpWebRequest)WebRequest.Create(url);
objRequest.Proxy = new WebProxy("127.0.0.1", 8888);   // route the request through Fiddler
I have a Silverlight (v3) application that uses WebRequest to make an HTTP POST request to a webpage on the same website as the Silverlight app. This HTTP request gets back a 302 (a redirect) to another page on the same website, which HttpWebRequest is automatically supposed to follow (according to the documentation).
There's nothing particularly special about the code that makes the request (it uses the browser's HTTP stack; it is not configured to use the alternative built-in Silverlight HTTP stack):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Format("{0}?name={1}&size={2}", _UploadUrl, Uri.EscapeUriString(Name), TotalBytes));
request.Method = "POST";
All this works fine in Firefox and Chrome; Silverlight makes the POST HTTP request, receives a 302 response and automatically does a GET HTTP request of the specified redirect URL and returns that to me (I know this because I used Fiddler to watch the HTTP requests going on).
However, in Internet Explorer (v8), Silverlight does the POST HTTP request and then throws a WebException with a 404 error code!
Using Fiddler, I can see that Silverlight/Internet Explorer was successfully returned the 302 status code for the request. I assume I get the 404 status code (and associated WebException) in Silverlight because, as far as I know, HTTP requests made via the browser stack can only report 200 or 404 due to its limitations. The real question is: why does Internet Explorer not follow the redirect like the other browsers do?
Thanks in advance for any help!
EDIT: I would prefer not to use the Silverlight client HTTP stack because, to my knowledge, requests issued by it do not include the cookies that are part of the browser's session, critically the ASP.NET authentication cookie that I need attached to the HTTP requests made by the Silverlight control.
EDIT 2: I have discovered that Internet Explorer only exhibits this behaviour when you do a POST request. A GET request redirects successfully. This seems like pretty bad behaviour considering how many websites now do things in the Post-Redirect-Get style.
IE is closer to the specification: in responding to a 302 for a POST, the user agent should send another POST (though it should not do so without user confirmation).
FF and Chrome, on the other hand, are deliberately wrong: they copy the way user agents frequently misbehaved a long time ago (the problem goes back to the early days of HTTP).
For this reason, 307 was introduced in HTTP/1.1 to make it clear that the same HTTP method should be used (i.e. in this case, a POST), while 303 has always meant that one should use GET.
Therefore, instead of doing Response.Redirect, which results in a 302 that different user agents will handle in different ways, send a 303. The following code does so (and includes a valid entity body just to stay within the letter of the spec). There is an overload so you can call it with either a Uri or a string:
private void SeeOther(Uri uri)
{
    if (!uri.IsAbsoluteUri)
        uri = new Uri(Request.Url, uri);

    Response.StatusCode = 303;                      // 303 See Other: the client follows up with a GET
    Response.AddHeader("Location", uri.AbsoluteUri);
    Response.ContentType = "text/uri-list";
    Response.Write(uri.AbsoluteUri);                // valid entity body, per the spec
    Context.ApplicationInstance.CompleteRequest();  // skip the rest of the pipeline
}

private void SeeOther(string relUri)
{
    SeeOther(new Uri(Request.Url, relUri));
}
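Then call the helper wherever you currently call Response.Redirect (the page name below is just a placeholder):

// Instead of Response.Redirect("UploadComplete.aspx"), which sends a 302:
SeeOther("UploadComplete.aspx");   // sends 303 See Other, so every browser follows up with a GET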
I believe this was a feature change in Internet Explorer 7, whereby they changed the expected 200 response to a 302 telling IE to be redirected. There is no smooth solution to this problem that I know of. A similar question was posed a while back here.
Change in behavior with Internet Explorer 7 and later in regard to CONNECT requests