My application uses Azure Service Bus to store messages. I have an Azure function called HttpTriggerEnqueue, which allows me to enqueue messages. The problem is that this function can be invoked hundreds of times within a short interval. When I call HttpTriggerEnqueue once, twice, 10 times, or 50 times, everything works correctly. But when I call it 200 or 300 times (which is my use case), I get an error and not all messages are enqueued. From the Functions portal I get the following error:
threshold exceeded [connections]
I tried both the .NET SDK and a raw HTTP request. Here is my code.
HTTP REQUEST:
try
{
    var ENQUEUE = "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/messages";
    var client = new HttpClient(new HttpClientHandler() { AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip });
    var request = new HttpRequestMessage(HttpMethod.Post, ENQUEUE);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var sasToken = SASTokenGenerator.GetSASToken(
        "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/",
        "<MyKeyName>",
        "<MyPrimaryKey>",
        TimeSpan.FromDays(1)
    );
    client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", sasToken);
    request.Content = new StringContent(message, Encoding.UTF8, "application/json");
    request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
    request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("deflate"));
    var res = await client.SendAsync(request);
}
catch (Exception e) { }
And the code using the SDK:
var qClient = QueueClient.CreateFromConnectionString(MyConnectionString, MyQueueName);
var bMessage = new BrokeredMessage(message);
qClient.Send(bMessage);
qClient.Close();
I have the standard tier pricing on Azure.
If I call the function 300 times (for example) in a short interval, I get the error. How can I solve this?
The actual issue here isn't with the Service Bus binding (although you should follow the advice @Mikhail gave for that); it's a well-known issue with HttpClient. You shouldn't be re-creating the HttpClient on every function invocation. Store it in a static field so that it can be reused. Read this blog post for a great breakdown of the actual issue. The main point is that unless you refactor this to use a single instance of HttpClient, you're going to keep running into port exhaustion.
From the MSDN Docs:
HttpClient is intended to be instantiated once and re-used throughout the life of an application. Especially in server applications, creating a new HttpClient instance for every request will exhaust the number of sockets available under heavy loads. This will result in SocketException errors.
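For illustration, here is a minimal sketch of that reuse pattern against the question's setup. The SAS helper and the placeholder values are taken from the question; the class and method names are assumptions:
public static class HttpTriggerEnqueue
{
    //created once per host instance and shared by every invocation,
    //instead of a new HttpClient (and a new socket pool) per call
    private static readonly HttpClient _client = new HttpClient();

    public static async Task<HttpResponseMessage> SendToQueueAsync(string message)
    {
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/messages");
        var sasToken = SASTokenGenerator.GetSASToken(
            "https://<MyNamespace>.servicebus.windows.net/<MyEntityPath>/",
            "<MyKeyName>", "<MyPrimaryKey>", TimeSpan.FromDays(1));
        //set per-request headers so the shared client carries no per-call state
        request.Headers.TryAddWithoutValidation("Authorization", sasToken);
        request.Content = new StringContent(message, Encoding.UTF8, "application/json");
        return await _client.SendAsync(request);
    }
}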
You should use the Service Bus output binding to send messages to Service Bus from an Azure Function. It will handle connection management for you, so you shouldn't get such errors.
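For example, a minimal sketch of such a function (the queue name and connection setting name are assumptions, and the exact attribute shape depends on your Functions runtime version):
[FunctionName("HttpTriggerEnqueue")]
[return: ServiceBus("myqueue", Connection = "ServiceBusConnectionString")]
public static string Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] string message,
    ILogger log)
{
    //the returned value is sent to the queue by the output binding;
    //the runtime pools Service Bus connections across invocations
    return message;
}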
Related
I have an Azure Service Bus function trigger that is supposed to call an external endpoint whenever an event is placed in its designated queue. However, when the function picks up the event and makes the outgoing request, it fails with the following log message:
Executed 'AzureServicebusTrigger' (Failed, Id=cb218eb5-300b-40c0-8a4e-81f977b9cd5c)
An attempt was made to access a socket in a way forbidden by its access permissions
My function:
public static class AzureServicebusTrigger
{
    private static HttpClient _httpClient = new HttpClient();

    [FunctionName("AzureServicebusTrigger")]
    public static async Task Run(
        [ServiceBusTrigger("receiver", Connection = "ServiceBusConnectionString")] string myQueueItem,
        ILogger log)
    {
        var request = await _httpClient.GetAsync("http://<ip>:7000");
        var result = await request.Content.ReadAsStringAsync();
    }
}
What is this permission, and how do I enable it so that outgoing requests from my function become possible?
I have an educated guess. I've seen this before with Web Apps: it can occur when you've exhausted the number of outbound connections available for your pricing tier's SKU.
I'm guessing you're on the Azure Functions Consumption plan. If so, you are limited to 600 active connections (see here). So if you're seeing this problem intermittently, try moving to the Premium plan or an App Service plan of Standard or higher.
I think Rob Reagan is correct. To help confirm his answer, I suggest the following.
Inspect how many function executions and servers (function host instances) you had at the moment of the failure. You can do that by adding Application Insights and running a query like this:
performanceCounters
| where cloud_RoleName =~ 'YourFunctionName'
| where timestamp > ago(30d)
| distinct cloud_RoleInstance
Or reproduce the load and watch the Live Metrics stream.
I'm trying to create an application that continuously sends many HTTP requests to one web endpoint that uses NTLM authentication (via login and password). To optimize the app, we decided to use multithreaded execution so I can send many HTTP requests simultaneously. I'm using the following code:
private string DoGetRequestWithCredentials(Uri callUri, NetworkCredential credentials)
{
    using (var handler = new HttpClientHandler { Credentials = credentials })
    {
        using (var client = new HttpClient(handler))
        {
            MediaTypeWithQualityHeaderValue mtqhv;
            MediaTypeWithQualityHeaderValue.TryParse(
                "application/json;odata=verbose", out mtqhv); //success
            client.DefaultRequestHeaders.Accept.Add(mtqhv);
            client.Timeout = RequestTimeout.Value;

            var result = client.GetAsync(callUri).Result;
            result.EnsureSuccessStatusCode();
            return result.Content.ReadAsStringAsync().Result;
        }
    }
}
The code works fine in a single thread, but when I enable multithreading, I start receiving 401 UNAUTHORIZED responses.
My guess is that this happens because some of the concurrent requests are executed between the consecutive calls of the NTLM handshake.
Is my assumption correct?
How can I avoid this situation without locking the method? I really want the requests to be sent simultaneously.
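Would, for example, sharing one handler/client across all threads help, so that the NTLM-authenticated connections stay pooled? A sketch of what I mean (credential placeholders assumed; I have not verified that this fixes it):
//one shared handler keeps NTLM-authenticated connections in a single pool,
//instead of re-handshaking on a fresh handler per call
private static readonly HttpClient SharedClient = new HttpClient(
    new HttpClientHandler
    {
        Credentials = new NetworkCredential("<login>", "<password>"),
        PreAuthenticate = true
    });

private async Task<string> DoGetRequestAsync(Uri callUri)
{
    var result = await SharedClient.GetAsync(callUri);
    result.EnsureSuccessStatusCode();
    return await result.Content.ReadAsStringAsync();
}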
The first request, or a request after roughly 100 seconds of idling, is very slow and takes 15-30 seconds. Any request without idling takes less than a second. I am fine with the first request taking time, just not with such a short idle period causing the slowdown.
The slowdown is not unique to a single client: if I keep making requests on one client, requests stay quick on the other. Only when all clients have been idle for 100 seconds does the slowdown occur.
Here are some changes that I have tried:
Making HttpClient a singleton instead of disposing it in a using() block
Setting ServicePointManager.MaxServicePointIdleTime to a higher value, since by default it is 100 seconds. Since that matches my idle period I thought this was the issue, but it did not solve it.
Setting a higher ServicePointManager.DefaultConnectionLimit
Setting default proxy settings via web.config
Using await instead of httpClient.SendAsync(request).Result
It is not related to IIS application pool recycling, since the default there is 20 minutes and the rest of the application remains quick.
The requests are to a web service which communicates with AWS S3 to get files. I am at a loss for ideas at this point and all my research has led me to the above points that I already tried. Any ideas would be appreciated!
Here is the method:
//method signature reconstructed for readability; name and return type are assumptions
private VirtualFile GetVirtualFile(string key)
{
    //get httpclient singleton or create
    var httpClient = HttpClientProvider.FileServiceHttpClient;
    var queryString = string.Format("?key={0}", key);
    var request = new HttpRequestMessage(HttpMethod.Get, queryString);
    var response = httpClient.SendAsync(request).Result;
    if (response.IsSuccessStatusCode)
    {
        var metadata = new Dictionary<string, string>();
        foreach (var header in response.Headers)
        {
            //grab tf headers
            if (header.Key.StartsWith(_metadataHeaderPrefix))
            {
                metadata.Add(header.Key.Substring(_metadataHeaderPrefix.Length), header.Value.First());
            }
        }
        var virtualFile = new VirtualFile
        {
            QualifiedPath = key,
            FileStream = response.Content.ReadAsStreamAsync().Result,
            Metadata = metadata
        };
        return virtualFile;
    }
    return null;
}
The default idle timeout is about 1-2 minutes. After that, the client has to perform the handshake with the server again, which is why requests become slow after about 100 seconds of idling.
You could use SocketsHttpHandler to extend the idle timeout:
var socketsHandler = new SocketsHttpHandler
{
    //in practice about 5 minutes is the maximum idle time; note that timeouts longer
    //than the TCP timeout may be ignored if no TCP keep-alive is set at the transport level
    PooledConnectionIdleTimeout = TimeSpan.FromHours(27),
    MaxConnectionsPerServer = 10
};
client = new HttpClient(socketsHandler);
As you can see, although I set the idle timeout to 27 hours, in practice the connection only stays alive for about 5 minutes.
So in the end I just call the target endpoint with the same HttpClient every minute. That way there is always an established connection (you can check with netstat), and it works fine.
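For example, a minimal sketch of that keep-alive ping (the endpoint URL is a placeholder, and client is the HttpClient created above):
//ping the endpoint with the same HttpClient every minute so an
//established connection always exists; verify with netstat
var keepAlive = new System.Threading.Timer(async _ =>
{
    try
    {
        using (var ping = new HttpRequestMessage(HttpMethod.Head, "https://<your-endpoint>/"))
        {
            await client.SendAsync(ping);
        }
    }
    catch
    {
        //a failed ping only means the next real request re-handshakes
    }
}, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));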
I have written an asynchronous HttpModule that logs all the requests coming to a website.
When a request arrives at the website, the custom HTTP module calls the Web API to log the information to the database. .NET 4.5/Visual Studio with IIS Express.
////////////Custom HTTP module////////////////////
public class MyHttpLogger : IHttpModule
{
    public void Init(HttpApplication httpApplication)
    {
        EventHandlerTaskAsyncHelper taskAsyncHelper = new EventHandlerTaskAsyncHelper(LogMessage);
        httpApplication.AddOnBeginRequestAsync(taskAsyncHelper.BeginEventHandler, taskAsyncHelper.EndEventHandler);
    }

    public void Dispose() { } //required by IHttpModule

    private async Task LogMessage(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        var ctx = app.Context;
        //use default credentials aka Windows Credentials
        HttpClientHandler handler = new HttpClientHandler()
        {
            UseDefaultCredentials = true
        };
        using (var client = new HttpClient(handler))
        {
            client.BaseAddress = new Uri("http://localhost:58836/");
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            var activity = new Activities() { AppId = "TEST123", ActivityId = 10, CreatedBy = 1 };
            await client.PostAsJsonAsync("api/activity", activity);
        }
    }
}
Simplified WebAPI code for “api/activity”.
public async Task<HttpResponseMessage> Post(Activities activity)
{
    await Task.Delay(30000);
    // Database logging code goes here….
    return new HttpResponseMessage(HttpStatusCode.Created);
}
The question is: when PostAsJsonAsync executes, the code stops there. That much is expected, since the code has to await. But the calling routine does not continue asynchronously as expected, and the website response is slower by 30 seconds.
What is wrong with this code? Shouldn't hooking up an async HttpModule avoid interfering with the flow of the HTTP application? Currently, when I visit the website, the request is blocked.
Now, you have to understand the difference between an HTTP request and a local application. HTTP is totally stateless: you send out a request, the server processes it, and it sends back the response. The server does not keep any state about the client; it has no idea who the client is at all.
So in your case: the browser sends out a request -> the server gets it, waits 30 seconds while processing it, and then sends back the result -> the browser gets the response, and this is the point where the browser shows the result.
I am assuming what you are trying to do is this: the browser sends a request, you expect it to display something right away, and then, after the server finishes processing 30 seconds later, you want to display something else. This is not possible with await alone, for the reason mentioned above. What you should do instead is write some JavaScript to send out the request with some identifier, and then ask the server every x seconds whether task{ID} has finished and, if so, what the result is.
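For example, a rough server-side sketch of that submit-then-poll pattern (the controller name, route shape, and in-memory dictionary are illustrative assumptions; the fire-and-forget Task.Run is for demonstration only, since it does not survive app-pool recycles):
public class TaskStatusController : ApiController
{
    private static readonly ConcurrentDictionary<string, bool> Statuses =
        new ConcurrentDictionary<string, bool>();

    //POST api/taskstatus - start the long-running work, return an id immediately
    public IHttpActionResult Post()
    {
        var id = Guid.NewGuid().ToString("N");
        Statuses[id] = false;
        Task.Run(async () =>
        {
            await Task.Delay(30000); //stands in for the 30-second work
            Statuses[id] = true;
        });
        return Ok(id);
    }

    //GET api/taskstatus/{id} - the browser polls this every few seconds
    public IHttpActionResult Get(string id)
    {
        bool done;
        return Statuses.TryGetValue(id, out done)
            ? (IHttpActionResult)Ok(done ? "finished" : "pending")
            : (IHttpActionResult)NotFound();
    }
}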
I have a web service that gets a list from a client and inserts it into a database. The client has a Windows service that sends a list every 10 seconds. But there is a problem: what if it cannot reach the web service (server)? I must not lose any of the data. I decided to save the data to a text or binary file if the server is not reachable, and then upload it once the server is running again. However, how can I decide whether the web service is unavailable? If I store the data to a file in a catch block, it will store it whenever it gets any error, not only a "web service unavailable" error. Any advice?
You can make an HTTP request to the service's endpoint URL and check that everything is OK:
var url = "http://....";
//OR
var url = service_object.Url;
var request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 2000; //timeout 20 seconds
HttpWebResponse response = null;
try
{
response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode != HttpStatusCode.OK)
{
throw new ApplicationException(response.StatusDescription);
}
}
catch (ApplicationException ex)
{
//Do what you want here, create a file for example...
}
I'd introduce a queuing system (such as MSMQ or NServiceBus) so that the Windows service only needs to place messages into the queue, and something else (co-located with the web service) can dequeue messages and apply them (either directly or via the web service methods).
When everything is up and running, this shouldn't add much overhead over your current (direct) web service call, and when the web service is down, everything simply accumulates in the queue.
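For example, a minimal sketch of the sending side with MSMQ (the queue path and the listOfItems payload are assumptions):
//the Windows service writes to a durable local queue instead of calling
//the web service directly; a co-located consumer drains it
const string QueuePath = @".\private$\pending-uploads"; //assumed path

if (!MessageQueue.Exists(QueuePath))
    MessageQueue.Create(QueuePath, transactional: true);

using (var queue = new MessageQueue(QueuePath))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    queue.Send(new Message(listOfItems) { Recoverable = true }, tx); //listOfItems: the batch to upload
    tx.Commit();
}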