The first request, or any request after roughly 100 seconds of idling, is very slow and takes 15-30 seconds. Any request made without idling takes less than a second. I am fine with the first request taking time, but not with such a short idle period causing the slowdown.
The slowdown is not specific to a single client: if I keep making requests from one client, requests stay quick on the others as well. Only when all clients have been idle for 100 seconds does it slow down.
Here are some changes that I have tried:
Setting HttpClient to a singleton and not disposing it using a using() block
Setting ServicePointManager.MaxServicePointIdleTime to a higher value, since the default is 100 seconds. Because that matches my idle period exactly I thought this was the issue, but it did not solve it (see the sketch after this list).
Setting a higher ServicePointManager.DefaultConnectionLimit
Default proxy settings set via web.config
Using await instead of blocking on httpClient.SendAsync(request).Result
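For reference, a minimal sketch of the two ServicePointManager settings mentioned above, applied once at application startup. The specific values are examples only; neither setting fixed the slowdown in my case:

// Applied once, e.g. in Application_Start. Values are illustrative.
ServicePointManager.MaxServicePointIdleTime = (int)TimeSpan.FromMinutes(30).TotalMilliseconds; // default is 100,000 ms (100 s)
ServicePointManager.DefaultConnectionLimit = 100; // raise the per-host connection cap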
It is not related to IIS application pool recycling, since the idle timeout there defaults to 20 minutes and the rest of the application remains quick.
The requests are to a web service which communicates with AWS S3 to get files. I am at a loss for ideas at this point and all my research has led me to the above points that I already tried. Any ideas would be appreciated!
Here is the method:
// get the HttpClient singleton, or create it on first use
var httpClient = HttpClientProvider.FileServiceHttpClient;
var queryString = string.Format("?key={0}", key);
var request = new HttpRequestMessage(HttpMethod.Get, queryString);
var response = httpClient.SendAsync(request).Result;
if (response.IsSuccessStatusCode)
{
    var metadata = new Dictionary<string, string>();
    foreach (var header in response.Headers)
    {
        // grab the tf headers
        if (header.Key.StartsWith(_metadataHeaderPrefix))
        {
            metadata.Add(header.Key.Substring(_metadataHeaderPrefix.Length), header.Value.First());
        }
    }
    var virtualFile = new VirtualFile
    {
        QualifiedPath = key,
        FileStream = response.Content.ReadAsStreamAsync().Result,
        Metadata = metadata
    };
    return virtualFile;
}
return null;
The default idle timeout is about 1-2 minutes. After that, the client has to re-handshake with the server, so you will find that requests are slow after ~100 seconds of idling.
You could use SocketsHttpHandler to extend the idle timeout.
var socketsHandler = new SocketsHttpHandler
{
    // In practice about 5 minutes is the maximum idle time. Note that timeouts
    // longer than the TCP timeout may be ignored if no keep-alive TCP message
    // is set at the transport level.
    PooledConnectionIdleTimeout = TimeSpan.FromHours(27),
    MaxConnectionsPerServer = 10
};
client = new HttpClient(socketsHandler);
As you can see, although I set the idle timeout to 27 hours, in practice the connection only stays alive for about 5 minutes.
So, in the end I just call the target endpoint with the same HttpClient every minute. That way there is always an established connection; you can use netstat to verify that. It works fine.
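A minimal sketch of that keep-alive ping, assuming client is the long-lived HttpClient from above with its BaseAddress set; the /ping path is a made-up lightweight endpoint:

// Ping the service once a minute so the pooled connection never goes idle.
// Assumes client.BaseAddress is set; "/ping" is a hypothetical cheap endpoint.
var keepAlive = new System.Threading.Timer(async _ =>
{
    try { await client.GetAsync("/ping"); }
    catch { /* best effort: a failed ping only means the next real request re-handshakes */ }
}, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));

Keep a reference to the timer (as above) so it isn't garbage collected.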
Related
My application uses Azure Service Bus to store messages. I have an Azure function called HttpTriggerEnqueue which allows me to enqueue messages. The problem is that this function can be invoked hundreds of times within a short interval. When I call HttpTriggerEnqueue once, twice, 10 times, or 50 times, everything works correctly. But when I call it 200 or 300 times (which is my use case) I get an error and not all messages are enqueued. From the Functions portal I get the following error:
threshold exceeded [connections]
I tried both the .NET SDK and a raw HTTP request. Here is my code.
HTTP REQUEST:
try
{
    var ENQUEUE = "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/messages";
    var client = new HttpClient(new HttpClientHandler() { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip });
    var request = new HttpRequestMessage(HttpMethod.Post, ENQUEUE);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var sasToken = SASTokenGenerator.GetSASToken(
        "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/",
        "<MyKeyName>",
        "<MyPrimaryKey>",
        TimeSpan.FromDays(1)
    );
    client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", sasToken);
    request.Content = new StringContent(message, Encoding.UTF8, "application/json");
    request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
    request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("deflate"));
    var res = await client.SendAsync(request);
}
catch (Exception e) { }
And the code using the SDK:
var qClient = QueueClient.CreateFromConnectionString(MyConnectionString, MyQueueName);
var bMessage = new BrokeredMessage(message);
qClient.Send(bMessage);
qClient.Close();
I have the standard tier pricing on Azure.
If I call the function 300 times (for example) within a short interval I get the error. How can I solve this?
The actual issue here isn't with the Service Bus binding (although you should follow the advice that @Mikhail gave for that); it's a well-known issue with HttpClient. You shouldn't be re-creating the HttpClient on every function invocation. Store it in a static field so that it can be reused. Read this blog post for a great breakdown of the actual issue. The main point is that unless you refactor this to use a single HttpClient instance, you're going to keep running into port exhaustion.
From the MSDN Docs:
HttpClient is intended to be instantiated once and re-used throughout the life of an application. Especially in server applications, creating a new HttpClient instance for every request will exhaust the number of sockets available under heavy loads. This will result in SocketException errors.
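A minimal sketch of that pattern, assuming the queue URL and SAS token from the question are passed in; EnqueueAsync is a hypothetical helper name:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class HttpTriggerEnqueue
{
    // One HttpClient for the life of the application, shared across invocations.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<HttpResponseMessage> EnqueueAsync(string message, string enqueueUrl, string sasToken)
    {
        // Per-request state lives on the HttpRequestMessage, so sharing the client is safe.
        var request = new HttpRequestMessage(HttpMethod.Post, enqueueUrl)
        {
            Content = new StringContent(message, Encoding.UTF8, "application/json")
        };
        request.Headers.TryAddWithoutValidation("Authorization", sasToken);
        return await Client.SendAsync(request);
    }
}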
You should use the Service Bus output binding to send messages to Service Bus from an Azure Function. It handles connection management for you, so you shouldn't get such errors.
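For illustration, a hedged sketch of what that binding could look like with the C# attribute model (the queue name and connection setting name are assumptions):

// HTTP-triggered function whose return value is enqueued by the runtime,
// so no HttpClient or QueueClient management is needed in user code.
[FunctionName("HttpTriggerEnqueue")]
[return: ServiceBus("MyQueueName", Connection = "ServiceBusConnection")]
public static async Task<string> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req)
{
    return await req.Content.ReadAsStringAsync();
}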
I am currently running a script that hits an API roughly 3000 times over an extended period. I am using Parallel.ForEach() so that I can send a few requests at a time to speed up the process. I create two requests in each iteration, so I should have no more than 10 requests in flight. My question: I keep receiving 429 errors from the server, and the person who manages the server says they are seeing requests in bursts of 40+. With my current understanding of my code I don't believe this is even possible. Can someone let me know if I am missing something here?
public static List<Requests> GetData(List<Requests> requests)
{
    ParallelOptions para = new ParallelOptions();
    para.MaxDegreeOfParallelism = 5;
    Parallel.ForEach(requests, para, request =>
    {
        WeatherAPI.GetResponseForDay(request);
    });
    return requests;
}
public static Request GetResponseForDay(Request request)
{
    // renamed from "request" to avoid shadowing the method parameter
    var webRequest = WebRequest.Create(request.Url); // assuming Request exposes its target URL
    webRequest.Timeout = 3600000;
    HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse();
    StreamReader myStreamReader = new StreamReader(response.GetResponseStream());
    string responseData = myStreamReader.ReadToEnd();
    response.Close();

    var request2 = WebRequest.Create(requestthesecond); // "requestthesecond" is defined elsewhere in the original code
    HttpWebResponse response2 = (HttpWebResponse)request2.GetResponse();
    StreamReader myStreamReader2 = new StreamReader(response2.GetResponseStream());
    string responseData2 = myStreamReader2.ReadToEnd();
    response2.Close();

    DoStuffWithData(responseData, responseData2);
    return request;
}
As smartobelix pointed out, your MaxDegreeOfParallelism of 5 doesn't prevent you from sending more requests than the server-side policy allows; all it does is prevent you from exhausting the threads needed to make those requests on your side. So what you need to do is talk to the server's owner, get familiar with their limits, and change your code so it never hits them.
The variables involved are:
number of concurrent requests you will send (parallelism)
average time one request takes
maximum requests per unit time allowed by the server
So, for example, if your average request time is 200 ms and your max degree of parallelism is 5, you can expect to send about 25 requests per second on average; if a request takes 500 ms, you'll send only 10.
Mix the server's allowed numbers into that and you'll get an idea of how to fine-tune your own numbers. A sketch of one way to enforce such a limit follows below.
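A minimal sketch of that kind of client-side pacing, assuming a server limit of 100 requests/min (one request start every 600 ms); names are illustrative:

// Serialize request *starts* through a gate so the global rate stays under the
// assumed server limit, no matter how many parallel workers are running.
private static readonly SemaphoreSlim PaceGate = new SemaphoreSlim(1, 1);
private static readonly TimeSpan MinInterval = TimeSpan.FromMilliseconds(600); // 100 requests/min
private static DateTime _lastStart = DateTime.MinValue;

public static void WaitForSlot()
{
    PaceGate.Wait();
    try
    {
        var wait = _lastStart + MinInterval - DateTime.UtcNow;
        if (wait > TimeSpan.Zero)
            Thread.Sleep(wait);
        _lastStart = DateTime.UtcNow;
    }
    finally { PaceGate.Release(); }
}

Calling WaitForSlot() at the top of GetResponseForDay would keep all the Parallel.ForEach workers collectively under the limit.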
Both answers above are essentially correct. The problem I had was that the server manager had set a rate limit of 100 requests/min where there previously wasn't one, and didn't inform me. As soon as I hit that limit I started receiving 429 errors almost instantly, and since my code kept firing requests at the same speed, those requests also received 429 errors.
I have a distributed service that takes anywhere from 10 seconds to 10 minutes to process a message. The service starts on a user's (i.e. browser) request, which is received through an API. Due to various limitations (timeouts, dropped client connections, ...) the result of the service has to be returned outside of the initial request. Therefore I return a SessionId to the requesting user, which can be used to retrieve the result until it expires.
Now it can happen that a user makes multiple consecutive requests for the result while the session is still locked. For example, the following code gets hit with the same SessionId within 60 seconds:
var session = await responseQueue.AcceptMessageSessionAsync(sessionId);
var response = await session.ReceiveAsync(TimeSpan.FromSeconds(60));
if (response == null)
{
    session.Abort();
    return null;
}
await response.AbandonAsync();
What I need is a setup without locking and the ability to read a message multiple times until it expires plus the ability to wait for yet non-existent messages.
Which ServiceBus solution fits that bill?
UPDATE
Here's a dirty solution, still looking for a better way:
MessageSession session = null;
try
{
    session = await responseQueue.AcceptMessageSessionAsync(sessionId);
}
catch
{
    // session is still locked by another request; return so the client can
    // try again until the 60 sec lock is released (otherwise session stays
    // null and the next line would throw)
    return null;
}
var response = await session.ReceiveAsync(TimeSpan.FromSeconds(60));
// ...
I have a Windows Service that sends JSON data to an MVC5 WebAPI using WebClient. Both the Windows Service and the WebAPI are currently on the same machine.
After it has run for about 15 minutes at roughly 10 requests per second, each post takes unreasonably long to complete. It starts out at about 3 ms per request and builds up to about 5 seconds, which is way too much for my application.
This is the code I'm using:
private WebClient GetClient()
{
var webClient = new WebClient();
webClient.Headers.Add("Content-Type", "application/json");
return webClient;
}
public string Post<T>(string url, T data)
{
var sw = new Stopwatch();
try
{
var json = JsonConvert.SerializeObject(data);
sw.Start();
var result = GetClient().UploadString(GetAddress(url), json);
sw.Stop();
if (Log.IsVerboseEnabled())
Log.Verbose(String.Format("json: {0}, time(ms): {1}", json, sw.ElapsedMilliseconds));
return result;
}
catch (Exception)
{
sw.Stop();
Log.Debug(String.Format("Failed to send to webapi, time(ms): {0}", sw.ElapsedMilliseconds));
return "Failed to send to webapi";
}
}
The result of the request isn't really of importance to me.
The serialized data size varies from just a few bytes to about 1 kB but that does not seem to affect the time it takes to complete the request.
The api controllers that receive the request completes their execution almost instantly (0-1 ms).
From various questions here on SO and some blog posts I've seen people suggest using HttpWebRequest instead, to gain control over the request's options.
Using HttpWebRequest I've tried these things, which did not work:
Setting the proxy to an empty proxy.
Setting the proxy to null.
Setting ServicePointManager.DefaultConnectionLimit to an arbitrarily large number.
Disabling KeepAlive (I didn't want to, but it was suggested).
Not opening the response stream at all (had some impact, but not enough).
Why are the requests taking so long? Any help is greatly appreciated.
It turned out that another part of the program was taking all the available connections; i.e. I was out of sockets and had to wait for a free one.
I found out by monitoring ASP.NET Applications\Requests/Sec in Performance Monitor.
I want my HTTP GET request to fail by timing out if it takes more than 10 seconds.
I have this:
var request = (HttpWebRequest)WebRequest.Create(myUrl);
request.Method = "GET";
request.Timeout = 1000 * 10; // 10 seconds
HttpStatusCode httpStatusCode = HttpStatusCode.ServiceUnavailable;
using (var webResponse = (HttpWebResponse)request.GetResponse())
{
httpStatusCode = webResponse.StatusCode;
}
It doesn't seem to time out when I put a bad URL in the request; it just keeps going and going for a long time (it seems like minutes). Why is this?
If you are doing it in a web project, make sure the debug attribute of the system.web/compilation tag in the Web.Config file is set to "false".
If it is a console application or such, compile it in "Release" mode.
A lot of timeouts are ignored in "Debug" mode.
Your code is probably performing a DNS lookup on the bad URL, which can take 15 seconds or more.
According to the documentation for HttpWebRequest.Timeout:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
You can perform a DNS lookup yourself using Dns.GetHostEntry, but it looks like that will take about 5 seconds by default.
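A minimal sketch of failing fast on the DNS step by racing an asynchronous lookup against your own deadline before issuing the request (the 2-second budget is an arbitrary choice):

// Pre-resolve the host with our own deadline so a dead hostname fails in about
// 2 seconds instead of the 15+ seconds built-in resolution can take.
var host = new Uri(myUrl).Host;
var lookup = Dns.GetHostEntryAsync(host);
if (await Task.WhenAny(lookup, Task.Delay(TimeSpan.FromSeconds(2))) != lookup)
    throw new TimeoutException("DNS lookup for " + host + " timed out");
var entry = await lookup; // throws here if resolution itself failed
// entry resolved; proceed with the HttpWebRequest as before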