I'm writing code to send a request to my web service in Windows Azure, and it turns out I cannot get a response until the timeout is reached. What's confusing is that I'm sure everything inside my service has completed, and yet the response never comes back.
My service code looks like this:
public virtual JsonResult ServiceMethod()
{
    // 1. do the work
    // 2. write something to database
    return Json(result, JsonRequestBehavior.AllowGet);
}
And my client code looks like this:
HttpWebRequest webRequest = WebRequest.Create(new Uri(httpAddress)) as HttpWebRequest;
webRequest.Timeout = 1000 * 1000;
webRequest.ServicePoint.ConnectionLeaseTimeout = 40 * 60 * 1000;
webRequest.ServicePoint.MaxIdleTime = 40 * 60 * 1000;
webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000);
webRequest.Method = "GET";
using (HttpWebResponse response = webRequest.GetResponse() as HttpWebResponse)
{
    // handle the response
}
Now I'm pretty sure the code in my service has completed, because I tried to write something to the database and that write happened. I also checked the IIS log on the virtual machine where the service is hosted, and it shows that HTTP 200 was returned. But the GetResponse() call in my client code hung until the 1000-second timeout was reached.
Update
There is a parameter to my web API that affects how long the method runs (I didn't show it in the code above for simplicity). If the service method runs for sufficiently long (around 6 or 7 minutes), the hanging problem occurs; otherwise, the response returns successfully. So I guess the problem lies somewhere in the timeout settings. But there are several timeout properties on HttpWebRequest and its base classes, and I don't know which combination of them can cause or solve this problem.
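For reference, these are the timeout-related properties I have been experimenting with (the values are just what I currently use; the comments are my understanding of what each one controls):
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(new Uri(httpAddress));
webRequest.Timeout = 1000 * 1000;          // max time for GetResponse()/GetRequestStream() to return
webRequest.ReadWriteTimeout = 1000 * 1000; // max time for a single read/write on the request/response stream
webRequest.ServicePoint.MaxIdleTime = 40 * 60 * 1000;            // how long an idle connection stays pooled
webRequest.ServicePoint.ConnectionLeaseTimeout = 40 * 60 * 1000; // max lifetime of an active pooled connection
webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000);  // TCP-level keep-alive probes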
Any ideas what the problem could be?
Thanks a lot.
From your code, and since you're returning a JsonResult from your action, it seems that you have a RESTful architecture. One of the ways to debug RESTful architectures is to use a browser, especially in your case where the HTTP method is GET.
You can simply copy/paste your URL into the address bar of a browser to see if you get any response. If you see the result, then your client code is the problem; otherwise, the problem is on the server side (the Azure server).
To monitor HTTP traffic, a good utility is Fiddler.
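If your client code doesn't pick up Fiddler automatically, you can also point the request at Fiddler's proxy explicitly so the traffic shows up in its session list (8888 is Fiddler's default listening port; adjust it if you have reconfigured Fiddler):
// Route this particular request through the local Fiddler instance.
webRequest.Proxy = new WebProxy("127.0.0.1", 8888);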
Run your project locally in the Windows Azure emulator. Then, check the output of the compute emulator. It should show unhandled exceptions there.
It seems that the call to your external service fails, and the exception is not handled properly. We've had a similar case before, and it would even cause the IIS worker process to crash.
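If that is what's happening here, one thing to try is catching the failure inside the action and returning an explicit error payload instead of letting the exception escape. A minimal sketch against the action shown in the question (the error shape is just an illustration, not a prescribed format):
public virtual JsonResult ServiceMethod()
{
    try
    {
        // 1. do the work
        // 2. write something to database
        return Json(result, JsonRequestBehavior.AllowGet);
    }
    catch (Exception ex)
    {
        // Log the failure and return a deliberate error response rather than
        // letting an unhandled exception take down the worker process.
        Response.StatusCode = 500;
        return Json(new { error = ex.Message }, JsonRequestBehavior.AllowGet);
    }
}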
I am stuck with this. I am calling a simple report server URL which returns a report's PDF, but strangely the WebRequest.GetResponse method doesn't return anything. By that I mean the code just stops executing at that point: no exception, no error, no status code, no event viewer log on the server, nothing! So I am not able to debug it.
This is my code
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.PreAuthenticate = true;
req.UseDefaultCredentials = true;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
req.ImpersonationLevel = TokenImpersonationLevel.Impersonation;
req.Timeout = int.MaxValue;
Log.Write("Log before");
var response = req.GetResponse();
Log.Write("Log after");
It just prints the "Log before" entry, and then nothing is printed after that.
This code works perfectly fine when I run it through Visual Studio, and stops working when it is deployed to the dev and test servers!
I am just expecting it to at least throw an exception, or return Unauthorized or some other status code, so that I can debug the issue.
Any suggestions what I can try to debug it?
Have you tried leaving it for 24 days 20 hours 31 minutes and 24 seconds? In other words, have you left it for as long as you have set the timeout to?
The server is not returning a response and the code is waiting int.MaxValue milliseconds to tell you that. The most likely cause is a piece of networking infrastructure between your client and server that is stopping the request, such as a firewall or proxy. It may also be that the server doesn't like the request and refuses to respond.
Things I would try:
Try accessing the URL through a web browser on the machine that the code is failing on.
Set the timeout to something sensible, like one minute, and run the request to timeout (see the sketch after this list).
Try pinging the remote server from the client machine.
Use a product like Fiddler to check what is actually being sent and received.
Have a chat with your network provider to see if they can help.
Check the server logs to see if the server has erred.
Change the first log to Log.Write("Log before: " + url); to check what is actually being requested.
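For the timeout suggestion above, here is a minimal sketch of what I mean: a one-minute timeout plus a catch for WebException, so that a timeout or a non-success status actually surfaces in your log (Log is assumed to be your existing logging helper):
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.PreAuthenticate = true;
req.UseDefaultCredentials = true;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
req.Timeout = 60 * 1000;          // one minute instead of int.MaxValue
req.ReadWriteTimeout = 60 * 1000; // also bound reads on the response stream

try
{
    Log.Write("Log before: " + url);
    using (var response = (HttpWebResponse)req.GetResponse())
    {
        Log.Write("Log after: " + response.StatusCode);
    }
}
catch (WebException ex)
{
    // A timeout or an error status ends up here instead of blocking forever.
    var errorResponse = ex.Response as HttpWebResponse;
    Log.Write("Request failed: " + ex.Status +
        (errorResponse != null ? " / " + errorResponse.StatusCode : ""));
}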
I have deployed an ASP.NET MVC web app to Azure App Service.
I do a GET request from my site to a controller method which gets data from the DB (DbContext). Sometimes getting the data from the DB takes more than 4 minutes, which means my request produces no activity for more than 4 minutes. After that, Azure kills the connection and I get the message:
500 - The request timed out. The web server failed
to respond within the specified time.
This is a method example:
[HttpGet]
public async Task<JsonResult> LongGet(string testString)
{
    var task = Task.Delay(360000);
    await task;
    return Json("Woke", JsonRequestBehavior.AllowGet);
}
I have seen a lot of questions like this, but I got no answer:
Not working 1
Can't give the other link - my reputation is too low.
I have read this article - it's about the Azure Load Balancer, which is not available for web apps, but it says that the common way of handling my problem in an Azure web app is to use TCP keep-alive. So I changed my method:
[HttpGet]
public async Task<JsonResult> LongPost(string testString)
{
    ServicePointManager.SetTcpKeepAlive(true, 1000, 5000);
    ServicePointManager.MaxServicePointIdleTime = 400000;
    ServicePointManager.FindServicePoint(Request.Url).MaxIdleTime = 4000000;
    var task = Task.Delay(360000);
    await task;
    return Json("Woke", JsonRequestBehavior.AllowGet);
}
But I still get the same error.
I am using simple GET request like
GET /Home/LongPost?testString="abc" HTTP/1.1
Host: longgetrequest.azurewebsites.net
Cache-Control: no-cache
Postman-Token: bde0d996-8cf3-2b3f-20cd-d704016b29c6
So I am looking for an answer to what I am doing wrong and how to increase the request timeout in an Azure web app. Any help is appreciated.
Azure setting on portal:
Web sockets - On
Always On - On
App settings:
SCM_COMMAND_IDLE_TIMEOUT = 3600
WEBSITE_NODE_DEFAULT_VERSION = 4.2.3
230 seconds. That's it. That's the in-flight request timeout in Azure App Service. It's hardcoded in the platform, so TCP keep-alives or not, you're still bound by it.
Source -- see David Ebbo's answer here:
https://social.msdn.microsoft.com/Forums/en-US/17305ddc-07b2-436c-881b-286d1744c98f/503-errors-with-large-pdf-file?forum=windowsazurewebsitespreview
There is a 230 second (i.e. a little less than 4 mins) timeout for requests that are not sending any data back. After that, the client gets the 500 you saw, even though in reality the request is allowed to continue server side.
Without knowing more about your application it's difficult to suggest a different approach, but what's clear is that you do need one.
Maybe return a 202 Accepted instead, with a Location header the client can poll for the result later?
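For example, a very rough sketch of that pattern in MVC. All of the names below (StartLongGet, LongGetStatus, RunLongQuery, _results) are placeholders for illustration, not something prescribed by Azure or MVC:
// Placeholder store for finished results, keyed by job id.
private static readonly ConcurrentDictionary<string, object> _results =
    new ConcurrentDictionary<string, object>();

[HttpGet]
public ActionResult StartLongGet(string testString)
{
    string jobId = Guid.NewGuid().ToString("N");

    // Kick off the long-running work in the background. In a real app prefer
    // HostingEnvironment.QueueBackgroundWorkItem or an external queue/worker,
    // because a bare Task.Run can be lost when the app pool recycles.
    Task.Run(() => _results[jobId] = RunLongQuery(testString));

    Response.StatusCode = 202; // Accepted
    Response.AddHeader("Location", Url.Action("LongGetStatus", new { id = jobId }));
    return new EmptyResult();
}

[HttpGet]
public JsonResult LongGetStatus(string id)
{
    object result;
    if (_results.TryGetValue(id, out result))
        return Json(result, JsonRequestBehavior.AllowGet);      // finished: return the payload

    return Json("Still running", JsonRequestBehavior.AllowGet); // not finished: poll again later
}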
I just changed my Azure Web Site from the Shared environment to Standard, and it works.
Using HttpWebRequest, I'm trying to query a secured (negotiate) URL behind a load-balancing setup in round-robin mode (two IIS 7.5 servers). Seems simple enough, but I have some problems.
The first anonymous request goes to one server and the negotiate part goes to the other. The problem is that it takes about six seconds between these two requests, which is way too long. Trying to diagnose the delay, I realized that, going through Fiddler's proxy, all the requests went to the same server, so the whole exchange took less than one second. If I disable Fiddler's option "reuse server connections", then my requests behave the same as without Fiddler and take forever.
Googling this, I ended up on this link: http://fiddler2.com/blog/blog/2013/02/28/help!-running-fiddler-fixes-my-app-
I know that Fiddler is using sockets and its own connection pool, but is there a way to reproduce the same behavior using .NET HttpWebRequest so that my requests (anonymous and negotiate) will reuse connections and end up on the same server?
Here is a quick test that takes about 70 seconds to complete without Fiddler, and about 2 seconds going through Fiddler...
Also, please note that it isn't a proxy-detection delay, and that sticky sessions are disabled on the NLB.
public static void Main(string[] args)
{
    int i = 0;
    while (i < 10)
    {
        HttpWebRequest wr = (HttpWebRequest)WebRequest.Create("http://nlb/service.asmx");
        HttpWebResponse response;
        wr.KeepAlive = true;
        wr.UseDefaultCredentials = true;
        response = (HttpWebResponse)wr.GetResponse();
        using (StreamReader sr = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(sr.ReadToEnd());
        }
        response.Close();
        i++;
    }
}
This is another proof that Fiddler is plain awesome!
Thanks for any advice.
Just a shot here, and maybe it seems too easy -
But near the end of your loop you call response.Close(). The documentation prior to .NET 4.5 doesn't say much about this other than that it "closes the existing socket connection."
In .NET 4.5 however, this is the documentation:
This method terminates the connection to the client in an abrupt
manner and is not intended for normal HTTP request processing.
http://msdn.microsoft.com/en-us/library/system.web.httpresponse.close(v=vs.110).aspx
I'll admit that I don't know some of the subtle differences between .NET 4.5 and the prior versions of HttpResponse; however, I do think that, logically, an explicit Close() is not compatible with Keep-Alive, and you could be seeing Fiddler intervening (maybe as a bug) to patch over this. Just a theory - it needs testing.
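If you want to test that theory, here is the same loop with the explicit Close() traded for using blocks, so the response and its stream are disposed after being fully read. Just a sketch against the test code above, nothing else changed:
public static void Main(string[] args)
{
    for (int i = 0; i < 10; i++)
    {
        HttpWebRequest wr = (HttpWebRequest)WebRequest.Create("http://nlb/service.asmx");
        wr.KeepAlive = true;
        wr.UseDefaultCredentials = true;

        // Disposing the response (after reading it fully) releases the connection for reuse
        // instead of calling Close() explicitly.
        using (HttpWebResponse response = (HttpWebResponse)wr.GetResponse())
        using (StreamReader sr = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(sr.ReadToEnd());
        }
    }
}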
I have a strange situation in my .NET CF 3.5 Windows Mobile 6.5 application.
I have 2 threads.
In the 1st thread I do the following:
try
{
    String url = "http://myserverurl";
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    _currentRequest = request;
    request.Timeout = 10000;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    // check the status code to decide whether the server is reachable
    bool connStatus = response.StatusCode == HttpStatusCode.OK;
    response.Close();
}
catch (Exception e)
{
    // log e
}
In the 2nd thread I call a web service through a SoapHttpClientProtocol-based class generated by the web service reference.
soapClient.Url = "http://myserverurl";
soapClient.MethodOnWebService();
In both cases the URL is the same. The 1st thread is used for connection checking: it performs the WebRequest periodically to check whether the server is available and displays the connection status (not shown in the code). The 2nd thread calls a web service on the same server (URL). I observed that when one thread is executing a WebRequest, the 2nd one gets blocked or even times out while executing a web method. They seem to interfere with each other. Why? I wonder if the Windows Mobile network stack simply creates only one socket connection for both threads when it sees that both go to the same target IP:port. What about sessions? On desktop Windows I would expect 2 sessions to be created and at least 2 sockets on the client machine.
Does anybody have any hints on how Windows Mobile (or .NET CF) manages connections and socket reuse?
Regards
I would guess that there is a third session somewhere. What you're seeing is most likely due to a little-known (until it bites you, like now) recommended connection limitation in the HTTP protocol. Section 8.1.4 of RFC 2068 says "A single-user client SHOULD maintain AT MOST 2 connections with any server or proxy". I've experienced the same limitation myself, most recently on Windows Phone 7.
The limit lies in the WebRequest and the solution is to increase the limit:
// set connection limit to 5
ServicePointManager.DefaultConnectionLimit = 5;
See e.g. this old blog entry from David Kline.
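One caveat worth spelling out, sketched here with the URL from the question: the limit has to be raised before the first request to that server creates its ServicePoint, and it can also be raised for just one endpoint instead of globally.
// Raise the global limit before the first request to the server is created...
ServicePointManager.DefaultConnectionLimit = 5;

// ...or raise it only for this particular endpoint.
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://myserverurl"));
sp.ConnectionLimit = 5;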
I have an application that needs to download several files in succession (sometimes a few thousand). However, when several files need to be downloaded, I end up getting an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging, and basically it's because the server has run out of sockets (they all wait around 240s before becoming available again); not coincidentally, it starts happening around the 1024-file mark. I would expect the HttpWebRequest/ServicePointManager to be reusing my connection, but apparently it is not (the files are served over HTTPS, so that may be part of it). I never saw this problem in the C++ code this was ported from (but that doesn't mean it never happened; I'd be surprised if it did, though).
I am properly closing the WebResponse object, and the HttpWebRequest has KeepAlive set to true by default. My next intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem.
Has anyone else run into the problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do.
Here's some basic code to verify what I'm doing (just in case I'm missing closing something):
WebRequest webRequest = WebRequest.Create(uri);
webRequest.Method = "GET";
webRequest.Credentials = new NetworkCredential(username, password);
WebResponse webResponse = webRequest.GetResponse();
try
{
    using (Stream stream = webResponse.GetResponseStream())
    {
        // read the stream
    }
}
finally
{
    webResponse.Close();
}
What kind of application is this? You mentioned that the server is running out of ports, but then you mentioned HttpWebRequest. Are you running this code in a web service or an ASP.NET page, which then tries to download multiple files for the same incoming request from the client?
What kind of authentication is the page using? If it is using NTLM authentication, then the connections cannot be shared if the credentials being used are different for each request.
What I would suggest is to group your requests per credential. So, for example, all requests using the username "John" would be grouped. You can specify the ConnectionGroupName property on the request, so the system will only reuse a connection for requests with the same group name (and therefore the same credential) against that server.
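A rough sketch of that grouping, using the uri/username/password from your code (the group-name scheme is just an illustration):
// Group pooled connections by the credential they were authenticated with, so a
// connection negotiated as "John" is only reused for further requests made as "John".
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential(username, password);
request.ConnectionGroupName = "user-" + username; // same group name => same pool of connections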
If that also doesn't work, you will need to do one or more of the following:
1) Throttle your requests.
2) Increase the wildcard port range.
3) Use the ServicePoint.BindIPEndPointDelegate callback to make it bind to a non-wildcard port (i.e. a port in the range 1024-16384) - a sketch follows below.
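A sketch of option 3, assuming the local port range 1024-16384 mentioned above is actually free on the client machine (uri is the download URI from your code, and the round-robin counter is just for illustration):
ServicePoint sp = ServicePointManager.FindServicePoint(uri);
int nextPort = 0;

sp.BindIPEndPointDelegate = (servicePoint, remoteEndPoint, retryCount) =>
{
    // Cycle through a fixed local port range instead of letting the OS pick
    // from the wildcard range that is being exhausted.
    int offset = Interlocked.Increment(ref nextPort) % (16384 - 1024 + 1);
    return new IPEndPoint(IPAddress.Any, 1024 + offset);
};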
More digging seems to point to it possibly being due to authentication, and the UnsafeAuthenticatedConnectionSharing property might alleviate this. However, I'm not sure that's the best thing to do, either.