So I can call my IPN handling page (an aspx page) happily using Fiddler to POST a fairly accurate version of what the IPN should be sending, and it works fine. However, as soon as I use the IPN test tool or try a 'real' transaction it throws a 405:
2012-01-25 18:46:55 193.128.120.227 POST /paypal_notify.aspx - 80 - 173.0.82.126 - 302 0 0
2012-01-25 18:46:55 193.128.120.227 POST /403_error.htm - 80 - 173.0.82.126 - 405 0 1
I just can't figure it out. Calling an ASP page from IPN works fine but ASPX and ASHX both throw 405s. And yet POSTing to the page myself isn't a problem.
If anyone's got any ideas what might cause this I'd be really grateful!
Well, your page first responds with a 302 redirect to an error page. PayPal then re-POSTs to /403_error.htm, and since a static .htm page only expects a GET, you get the 405. Something in paypal_notify.aspx is probably triggering that redirect in the first place.
403 means forbidden, so do you have any security scheme that throws the 403?
You might want to post the code you use in paypal_notify.aspx so we can figure out what is causing the 302 to the error page for 403.
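For comparison, here is a minimal sketch of a code-behind that answers PayPal's POST in place instead of redirecting it anywhere. The class name and the "queue it for processing" step are placeholders, not your actual code:

using System;
using System.Web.UI;

public partial class PaypalNotify : Page   // hypothetical code-behind for paypal_notify.aspx
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!string.Equals(Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
        {
            Response.StatusCode = 405;   // only IPN POSTs are expected here
            Context.ApplicationInstance.CompleteRequest();
            return;
        }

        string rawIpn = Request.Form.ToString();   // the variables PayPal just posted

        // ... validate rawIpn / queue it for processing here ...

        Response.StatusCode = 200;       // answer PayPal directly; no Response.Redirect
        Context.ApplicationInstance.CompleteRequest();
    }
}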
Related
We have some resources which contain links to external sites. We want to avoid dead links, so we have implemented a ping routine written in C# (.NET 6).
We loop through all the links and do a HEAD and a GET request with HttpClient. Most sites return 200 OK, but some return 400 Bad Request, 403 Forbidden, and so forth. Yet if we open the same link in a browser, the site works as expected.
If we get a 404 we mark the link as dead, and someone then has to update it manually. We have already added a User-Agent header to the HttpClient.
How can we avoid these error responses being returned to HttpClient?
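For reference, the check looks roughly like this. This is a simplified sketch; the exact header values are illustrative, and some sites may simply block non-browser clients regardless:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class LinkChecker
{
    private static readonly HttpClient Client = CreateClient();

    private static HttpClient CreateClient()
    {
        var client = new HttpClient(new HttpClientHandler { AllowAutoRedirect = true });

        // Headers a real browser would send; some sites reject requests without them.
        client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36");
        client.DefaultRequestHeaders.TryAddWithoutValidation("Accept",
            "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        client.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Language", "en-US,en;q=0.9");

        return client;
    }

    public static async Task<HttpStatusCode> CheckAsync(Uri url)
    {
        // Cheap HEAD first; fall back to GET because some servers answer HEAD
        // with 400/403/405 even though GET works fine in a browser.
        using var head = new HttpRequestMessage(HttpMethod.Head, url);
        var response = await Client.SendAsync(head, HttpCompletionOption.ResponseHeadersRead);

        if (!response.IsSuccessStatusCode)
        {
            using var get = new HttpRequestMessage(HttpMethod.Get, url);
            response = await Client.SendAsync(get, HttpCompletionOption.ResponseHeadersRead);
        }

        return response.StatusCode;   // only treat 404 as a dead link needing manual follow-up
    }
}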
When I try to fire an API through Fiddler, it's showing a 404. On the other hand, the site itself is loading properly.
Your site loads successfully, because your 2nd request shows a 200 for localhost (and I bet it is a GET request). After the site is loaded, it appears to run some JavaScript that tries to connect to a login site (a POST to aaweb.authorassists.com), and that request is what fails.
We had a little mishap where an https binding was created for a website without a hostname (just an ip), and then another website was created with only an http binding to a hostname using the same ip as the first site.
So the problem is that when you navigate to the 2nd site over https, instead of getting an error it just serves the first website. As a result, Google was able to reach the first site through the 2nd site's hostname over https, and now we have lots of duplicate links out in Google land.
I've already stopped the bleeding, but now I need to 301 all the bad links that were created by Google for the 2nd site. My plan is that, going forward, anytime a 404 error is encountered in the 2nd site then it will call for just the header from the same link on the 1st site. If the header returns with an OK status then it will do a permanent redirect to the 1st site.
There's just one part of that plan I don't know how to do off the top of my head: what's the best way to intercept the 404s so that I can run my code and decide whether the request should be 301'd or not? See the sketch below for roughly what I have in mind.
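In other words, assuming the 2nd site is ASP.NET, I'm imagining something roughly like this in Global.asax: catch the 404, probe the same path on the 1st site with a HEAD request, and only 301 when the probe comes back OK. The hostname is a placeholder, and I'm not sure Application_Error is the right hook:

using System;
using System.Net;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        var httpError = Server.GetLastError() as HttpException;
        if (httpError == null || httpError.GetHttpCode() != 404)
            return;

        // Same path and query, but on the 1st site (placeholder hostname).
        var target = new UriBuilder(Request.Url)
        {
            Scheme = "https",
            Host = "www.first-site.example",
            Port = -1
        }.Uri;

        // Ask the 1st site whether it can serve this path.
        var probe = (HttpWebRequest)WebRequest.Create(target);
        probe.Method = "HEAD";
        try
        {
            using (var probeResponse = (HttpWebResponse)probe.GetResponse())
            {
                if (probeResponse.StatusCode == HttpStatusCode.OK)
                {
                    Server.ClearError();
                    Response.RedirectPermanent(target.AbsoluteUri);   // issues the 301
                }
            }
        }
        catch (WebException)
        {
            // The 1st site can't serve it either; let the normal 404 go out.
        }
    }
}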
I have a Silverlight (v3) application that uses WebRequest to make an HTTP POST request to a webpage on the same website as the Silverlight app. This HTTP request gets back a 302 (a redirect) to another page on the same website, which HttpWebRequest is automatically supposed to follow (according to the documentation).
There's nothing particularly special about the code that makes the request (it uses the browser's HTTP stack, it is not configured to use the alternate inbuilt Silverlight HTTP stack):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
    String.Format("{0}?name={1}&size={2}", _UploadUrl, Uri.EscapeUriString(Name), TotalBytes));
request.Method = "POST";
All this works fine in Firefox and Chrome; Silverlight makes the POST HTTP request, receives a 302 response and automatically does a GET HTTP request of the specified redirect URL and returns that to me (I know this because I used Fiddler to watch the HTTP requests going on).
However, in Internet Explorer (v8), Silverlight does the POST HTTP request and then throws a WebException with a 404 error code!
Using Fiddler, I can see that Silverlight/Internet Explorer was successfully returned the 302 status code for the request. I assume the 404 status code (and the associated WebException) that I get in Silverlight comes from the limitation that, as far as I know, requests made via the browser stack can only surface 200 or 404. The real question is why Internet Explorer does not follow the redirect like the other browsers do.
Thanks in advance for any help!
EDIT: I would prefer not to use the Silverlight client HTTP stack because to my knowledge requests issued by it do not include cookies that are a part of the browser's session, critically including the ASP.NET authentication cookie that I need to be attached to the HTTP requests being made by the Silverlight control.
EDIT 2: I have discovered that Internet Explorer only exhibits this behaviour when you do a POST request. A GET request redirects successfully. This seems like pretty bad behaviour considering how many websites now do things in the Post-Redirect-Get style.
IE is closer to the specification: when responding to a 302 to a POST, the user agent should repeat the POST (though it should not do so without user confirmation).
FF and Chrome, on the other hand, are deliberately "wrong": they copy the way user agents frequently misbehaved a long time ago (the problem goes back to the early days of HTTP).
For this reason, 307 was introduced in HTTP/1.1 to make it clear that the same HTTP method should be used (i.e. in this case, a POST), while 303 has always meant that the follow-up request should be a GET.
Therefore, instead of calling Response.Redirect, which results in a 302 that different user agents handle in different ways, send a 303. The following code does so (and includes a valid entity body just to stay within the letter of the spec). There is an overload so you can call it with either a Uri or a string:
private void SeeOther(Uri uri)
{
    if (!uri.IsAbsoluteUri)
        uri = new Uri(Request.Url, uri);

    // 303 See Other: the user agent should follow with a GET, regardless of
    // the method used for the original request.
    Response.StatusCode = 303;
    Response.AddHeader("Location", uri.AbsoluteUri);

    // A minimal, valid entity body, as the spec suggests for 3xx responses.
    Response.ContentType = "text/uri-list";
    Response.Write(uri.AbsoluteUri);

    Context.ApplicationInstance.CompleteRequest();
}

private void SeeOther(string relUri)
{
    // Resolve the relative URI against the current request before redirecting.
    SeeOther(new Uri(Request.Url, relUri));
}
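At the call site, wherever the handler currently does a Response.Redirect, call the helper instead; the page name here is just a placeholder:

// Before: Response.Redirect("UploadComplete.aspx");   // 302, handled inconsistently
SeeOther("UploadComplete.aspx");                        // 303, followed with a GET everywhere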
I believe this was a behaviour change in Internet Explorer 7, whereby they changed the expected 200 response to a 302 telling IE to redirect. There is no smooth solution to this problem that I know of. A similar question was posed a while back here.
Change in behavior with Internet Explorer 7 and later in regard to CONNECT requests
I have 2 questions on this
My code always seems to hit a 401 Forbidden error when I try to POST data to an HTTP link.
What is the best way to pull back and display the XML data from the stream that I should be getting back?
My guess regarding your first question: Your "401 Forbidden" is actually a "401 Unauthorized" ("Forbidden" would be a fatal error, and it has code 403). This 401 response is a normal part of the NTLM (Windows-integrated) challenge/response authentication mechanism. Your request must have correct credentials attached so it can authorize itself, then this error will go away.
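A minimal sketch of attaching those credentials, assuming HttpWebRequest and an NTLM-protected endpoint (the URL and the form body are placeholders):

using System;
using System.Net;
using System.Text;

class PostWithNtlm
{
    static void Main()
    {
        // Placeholder endpoint; replace with the real one.
        var request = (HttpWebRequest)WebRequest.Create("http://intranet.example/service");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        // Answer the NTLM 401 challenge with the current Windows identity,
        // or pass an explicit NetworkCredential instead.
        request.Credentials = CredentialCache.DefaultCredentials;
        // request.Credentials = new NetworkCredential("user", "password", "DOMAIN");

        byte[] body = Encoding.UTF8.GetBytes("name=value");   // placeholder form data
        request.ContentLength = body.Length;
        using (var requestStream = request.GetRequestStream())
            requestStream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine((int)response.StatusCode);
    }
}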
Regarding your second question: it depends. What XML do you get back? Would displaying the raw XML string be useful?
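If the body really is XML, then in the sketch above, instead of just printing the status code, you could read it off the response stream along these lines (the "status" element is purely illustrative):

// Continuing from the request above (needs: using System.Xml.Linq;)
using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
{
    XDocument doc = XDocument.Load(stream);
    Console.WriteLine(doc);   // dump the raw XML, indented, as a quick first pass

    // Or pull out specific values once the schema is known, e.g.:
    // string status = doc.Root.Element("status")?.Value;   // "status" is illustrative
}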