504 error when requesting file from url in codebehind - c#

I'm trying to read a file, using a URL, however I keep getting a 504 Gateway Timeout.
The user submits a form, and I need to grab some information from a rather large (45 MB) XML file using an XmlTextReader. Each time the request is made, one server comes back with a 504 Gateway Timeout, while the same code works fine on another server. The 504 is thrown after about 20 seconds; on the server where it does work, the file is read much faster than that.
XmlTextReader reader = new XmlTextReader(localUrl);
The strange issue is that IIS is not even logging this request. I've gone through the logs: on the system that works I can find the entry, but on the system that doesn't work there is no entry at all, making it look like the request never even reaches IIS.

It turned out that the user the AppPool was running under had its proxy settings configured incorrectly, so it was unable to make the outgoing call it needed to make.
Once I corrected the proxy settings for that user, it started to work.
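If fixing the per-user settings isn't practical, the proxy behavior can also be controlled for the application itself. A minimal Web.config sketch, assuming the server can reach the URL directly without a proxy (adjust if your network requires one):

```xml
<configuration>
  <system.net>
    <defaultProxy>
      <!-- Ignore the (broken) per-user proxy settings for this app -->
      <proxy usesystemdefault="false" bypassonlocal="true" />
    </defaultProxy>
  </system.net>
</configuration>
```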

Related

Receiving different responses depending on client

I have two ASP.NET MVC web apps running on the same server. One of them is a web service that returns an error message in plain text if an exception occurs. However, right now, some clients that call the web service don't receive the error message; instead, they simply receive "Bad Request" in HTML.
The second web app (on the same server as the first) can call a URL handled by the first one and, right now, correctly receives the error message in plain text. However, I have tried calling that URL other ways, and all of them have resulted in receiving "Bad Request":
Pasting the URL into Chrome on my computer
Pasting the URL into IE on the server
Calling the URL from a web app on a different computer from the server
This error does not occur locally. When I run the 2 web apps on my computer, I receive the error message in plain text from both the second web app and from calling the local URL from Chrome.
I have narrowed down the offending line of code to the first line of the following ActionResult snippet:
Response.StatusCode = (int)HttpStatusCode.BadRequest;
return Content(errorMessage, ContentTypes.PlainText);
Removing the first line appears to fix the problem; however, that also prevents me from returning a descriptive status code. It appears that after the ActionResult is returned, the response is being intercepted if either (a) the client is on a different computer or (b) the client is a web browser. So I guess I have a 2-part question:
Is there a reason why .NET or IIS would intercept and change a response depending on the client type or location?
Is there an easy way to view the response at any point between this code and when it's dispatched to the client?
Thanks!
Update: I changed the web app to use HttpResponseException. Now I am getting the following YSOD exception:
Processing of the HTTP request resulted in an exception. Please see
the HTTP response returned by the 'Response' property of this
exception for details.
Using MVC version 5, Visual Studio 2013. The code for the ActionResult looks like this:
MyImage image = new MyImage(parameters);
if (image.Errors.Any())
{
    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.BadRequest)
    {
        Content = new StringContent(image.Error)
    });
}
return File(image.AsJpeg(), ContentTypes.Jpeg);
Anyone have an idea how to bypass this unhelpful response?
Update 2: The issue turned out to be that the error message was being suppressed by the Web.config setting system.webServer > httpErrors > errorMode, which defaults to "DetailedLocalOnly" and seems to be invoked in some cases for a reason I don't know (although this question may start to shed some light). Once I changed it to the following, it worked as I expected:
<httpErrors errorMode="Detailed" />
I understand why they suppress error messages by default on remote machines, but this was a lot harder to track down than I would have thought. Anyway, I hope this is helpful to someone in the future.
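For reference, a minimal sketch of where that element lives in Web.config; the existingResponse="PassThrough" attribute is a related setting worth knowing about, since it tells IIS to leave the response body your code produced untouched:

```xml
<configuration>
  <system.webServer>
    <!-- "Detailed" shows full error bodies to remote clients too;
         "PassThrough" stops IIS from replacing custom error responses -->
    <httpErrors errorMode="Detailed" existingResponse="PassThrough" />
  </system.webServer>
</configuration>
```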
I can't think of any reason why IIS would care what client was calling a service. My guess is that the client is sending a different request to the server than what you think it is sending. You can verify this by using a program called "Fiddler".
Also, I'd recommend following a pattern that returns an HttpResponseMessage like this when sending back information from a Web API call:
return new HttpResponseMessage(HttpStatusCode.BadRequest)
{
    ReasonPhrase = message,
    Content = new StringContent(string.Format("{0}", exception))
};

FiddlerCore behavior when network is disconnected

I'm handling local requests by using FiddlerCore like this:
private static void FiddlerApplication_BeforeRequest(Session session)
{
    if (session.hostname.ToLower() == "localhost")
        ProcessRequest(session);
}
Everything works well but when the Internet network is down, I'm getting the following message:
"[Fiddler] DNS Lookup for "www.google.com" failed. The system reports that no network connection is available. No such host is known"
My question is:
How should I configure FiddlerCore so when the network is down, I will receive the regular 404 page?
You're noting: When a proxy is present, the browser doesn't show its default error pages. That is a correct statement, and it's not unique to Fiddler.
You're confusing folks by talking about a "regular 404 response." That's confusing because the page you're describing has nothing to do with an HTTP 404: it's the browser's DNS Lookup Failure or Server Unreachable error page, shown when a DNS lookup fails or a TCP/IP connection attempt fails. Neither of those is a 404, which is an error code a server can return only after the DNS lookup succeeds and the TCP/IP connection is established.
To your question of: How can I make a request through a proxy result in the same error page that would be shown if the proxy weren't present, the answer is that, in general, you can't. You could do something goofy like copying the HTML out of the browser's error page and having the proxy return that, but because each browser (and version) may use different error pages, your ruse would be easily detectable.
One thing you could try is to make a given site bypass the proxy (such that the proxy is only used for the hosts you care about). To do that, you'd create a Proxy Auto-Configuration (PAC) file and have its FindProxyForURL function return DIRECT for everything except the site(s) you want to go through the proxy. See the Configuration Script section of this post.
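A minimal PAC sketch of that idea (the host name and proxy port are placeholders; real PAC files often use helpers like shExpMatch, omitted here to keep the example self-contained):

```javascript
// Only traffic for the one host we care about goes through the proxy;
// everything else connects directly, so the browser's own error pages apply.
function FindProxyForURL(url, host) {
    if (host === "localhost") {
        return "PROXY 127.0.0.1:8888"; // placeholder Fiddler address/port
    }
    return "DIRECT";
}
```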
But, stepping back a bit, do you even need a proxy at all? Can't you just run your web server on localhost? When you start Fiddler/FiddlerCore, it listens on localhost:[port]. Just direct your HTTP request there without setting Fiddler as the system proxy.
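A sketch of that last approach in C# (the port 8877 is a placeholder for whatever port your FiddlerCore instance was started on):

```csharp
using System;
using System.Net;

// Talk to the FiddlerCore listener directly instead of registering it as the
// system proxy. The port below is an assumption; use your startup port.
var request = (HttpWebRequest)WebRequest.Create("http://localhost:8877/index.htm");
request.Proxy = null; // skip any system/default proxy entirely

Console.WriteLine(request.RequestUri);
```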

Facebook Graph API 403 Forbidden Error

This is similar to some questions on here, but none have seemed to produce an answer that has helped me. I'm calling the graph api from a c#/.Net application to get photos for a particular album, and I'm receiving a 403 error...sometimes.
I've never received the error in my development environment, only in production. I'm also caching the responses for an hour, so the most the application would hit the API in a given hour would be around 20 times, and not all at once. I'm currently swallowing the exception when it errors out and simply not showing the images, but that isn't a long-term solution.
var request = WebRequest.Create("https://graph.facebook.com/ALBUM_ID/photos");
var stream = request.GetResponse().GetResponseStream();
This just started happening about a month ago but I didn't see anything in the breaking changes list that would suggest this behavior. Any insight would be appreciated.
Update
This was hidden away in the response stream.
{"error":{"message":"(#4) Application request limit
reached","type":"OAuthException","code":4}}
I don't see for the life of me how I could be hitting a limit considering I'm only hitting the api a few times.
Even if a GET request to a Graph API endpoint does not require an access_token, that does not mean you should omit it from the request parameters. If you leave it out, as the FB documentation permits, Facebook attributes the request to your server machine, so the limit (whatever amount it is exactly) can be reached very easily. If, however, you put a user access token into the request (&access_token=XXXXXX), the requests are attributed to that specific user, so the limit is hardly ever reached. You can test this with a simple script that makes 1000 requests with and without a user access token.
NOTE: an FB app access token will not be sufficient; you will face the same problem, because requests attributed to the app access_token are in a situation much like making requests without a token at all.
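As a sketch of the suggestion above (AppendAccessToken is a hypothetical helper for illustration, not part of any SDK):

```csharp
using System;

// Hypothetical helper: append an access_token query parameter so Graph API
// requests are attributed to a user rather than to the server's IP address.
static string AppendAccessToken(string url, string token)
{
    var separator = url.Contains("?") ? "&" : "?";
    return url + separator + "access_token=" + Uri.EscapeDataString(token);
}

Console.WriteLine(AppendAccessToken("https://graph.facebook.com/ALBUM_ID/photos", "XXXXXX"));
```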

invalid_grant Returned using Service Account and Google Drive API

I've spent 2 days messing around with various Drive API tutorials using a Service Account.
The most recent tutorial i used was this one: https://developers.google.com/drive/delegation
I keep getting this error when trying to upload a file:
ProtocolException was unhandled
Error occurred while sending a direct message or getting the response.
I installed Fiddler and determined that the POST to /o/oauth2/token was returning:
{
  "error": "invalid_grant"
}
I have already triple+ checked the scope of my application.
What am I doing wrong?
Turns out the time on my server was 5 minutes fast.
When I corrected the time on the server, everything worked.
I believe this was causing an authentication failure because Google's servers didn't accept the timestamp of the request coming from my server, or something along those lines...
(Hope this saves someone some head banging)

HTTPHandler does not handle secondary requests

I want to run my personal web sites via an HttpHandler (I have a web server and static IP at home).
Eventually, I will incorporate a data access layer and domain router into the handler, but for now, I am just trying to use it to return static web content.
I have the handler mapped to all verbs and paths with no access restrictions in IIS 7 on Windows 7.
I have added a little file logging at the beginning of ProcessRequest; since it is the first thing in the handler, the logging tells me whenever the handler is hit.
At the moment, the handler just returns a single web page that I have already written.
The handler itself is mostly just this:
using (FileStream fs = new FileStream(Request.PhysicalApplicationPath + "index.htm",
                                      FileMode.Open))
{
    fs.CopyTo(Response.OutputStream);
}
I understand that this won't work for anything but the one file.
So my issue is this: the HTML file has links to some images in it. I would expect that the browser would come back to the server to get those images as new requests. I would expect those requests to fail (because they'd be mapped to index.htm). But I would expect to see the logging hit at least twice (and potentially hit recursively). However, I only see a single request. The web page comes up and the images are 'X's.
When I refresh the browser, I see another request come through, but only for the root page again. The page is basic HTML, I do not have an asp.net application (nor do I want one, I like HTML/CSS/JS).
What do I have to do to get more than just the first request sent from the browser? I assume I'm just totally off the mark because I wrote an HTTP Module first, but strangely got the same exact behavior. I'm thinking I need to specify some response headers, but don't see that in any example.
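One piece that a catch-all handler will need either way is serving each requested path with a content type based on its extension, rather than always returning index.htm; browsers may refuse to render images served as text/html. A minimal sketch (MapContentType and its extension list are assumptions for illustration, not an existing API):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical helper: pick a MIME type from the requested file's extension,
// so secondary requests for images/CSS/JS get sensible response headers.
static string MapContentType(string path)
{
    var map = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        [".htm"] = "text/html",
        [".html"] = "text/html",
        [".css"] = "text/css",
        [".js"] = "application/javascript",
        [".png"] = "image/png",
        [".jpg"] = "image/jpeg",
    };
    return map.TryGetValue(Path.GetExtension(path), out var type)
        ? type
        : "application/octet-stream";
}

Console.WriteLine(MapContentType("/images/logo.png")); // image/png
```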
