Azure Functions outgoing HTTP call not working - c#

I have an Azure Service Bus function trigger which is supposed to call an external endpoint whenever an event is placed in its designated queue. However, when the function picks up the event and goes to make the outgoing request, it fails with the following log message:
Executed 'AzureServicebusTrigger' (Failed, Id=cb218eb5-300b-40c0-8a4e-81f977b9cd5c)
An attempt was made to access a socket in a way forbidden by its access permissions
My function:
public static class AzureServicebusTrigger
{
    private static readonly HttpClient _httpClient = new HttpClient();

    [FunctionName("AzureServicebusTrigger")]
    public static async Task Run(
        [ServiceBusTrigger("receiver", Connection = "ServiceBusConnectionString")] string myQueueItem,
        ILogger log)
    {
        var request = await _httpClient.GetAsync("http://<ip>:7000");
        var result = await request.Content.ReadAsStringAsync();
    }
}
What is this permission?
And how do I enable it so that outgoing requests from my function become possible?

I have an educated guess. I've seen this before with Web Apps. It can occur if you've exhausted the number of outbound connections available for your chosen SKU in your pricing tier.
I'm guessing you're on the Azure Functions Consumption plan. If so, you are limited to 600 active connections (see here). So if you're seeing this problem intermittently, try moving to the Premium tier or an App Service plan of Standard or higher.

I think Rob Reagan is correct. To help you confirm and accept his answer, I suggest the following.
Check how many function executions and servers (function host instances) you had at the moment of the failure. You can do that by adding Application Insights and writing a query such as:
performanceCounters
| where cloud_RoleName =~ 'YourFunctionName'
| where timestamp > ago(30d)
| distinct cloud_RoleInstance
Or by reproducing the load and watching the live metrics monitor.

Azure Service Bus Queue Trigger function is called more than once when deployed

I have two Azure Functions. One is HTTP triggered (let's call it the API), and the other one is ServiceBusQueue triggered (let's call this one the Listener).
The first one (the API) puts an HTTP request into a queue and the second one (the Listener) picks that up and processes it. The Functions SDK version is 3.0.7.
I have two projects in my solution for this: one which contains the Azure Functions and another which has the services. The API, once triggered, calls a service from the other project that puts the message into the queue. The Listener, once it receives a message, calls a service from the service project to process the message.
Any long-running process?
The Listener actually performs a lightweight workflow and it all happens very quickly considering the amount of work it executes. The average time of execution is 90 seconds.
What are the queue specs?
The queue that the Listener listens to and is hosted in an Azure ServiceBus namespace has the following properties set:
Max Delivery Count: 1
Message time to live: 1 day
Auto-delete: Never
Duplicate detection window: 10 min
Message lock duration: 5 min
The API puts the HTTP request into the queue using the following method:
public async Task ProduceAsync(string queueName, string jsonMessage)
{
    jsonMessage.NotNull();
    queueName.NotNull();

    IQueueClient client = new QueueClient(Environment.GetEnvironmentVariable("ServiceBusConnectionString"), queueName, ReceiveMode.PeekLock)
    {
        OperationTimeout = TimeSpan.FromMinutes(5)
    };

    await client.SendAsync(new Message(Encoding.UTF8.GetBytes(jsonMessage)));

    if (!client.IsClosedOrClosing)
    {
        await client.CloseAsync();
    }
}
And the Listener (the Service Bus queue triggered Azure Function) has the following code to process the message:
[FunctionName(nameof(UpdateBookingCalendarListenerFunction))]
public async Task Run([ServiceBusTrigger(ServiceBusConstants.UpdateBookingQueue, Connection = ServiceBusConstants.ConnectionStringKey)] string message)
{
    var data = JsonConvert.DeserializeObject<UpdateBookingCalendarRequest>(message);
    _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarListenerFunction)} picked up a message at {DateTime.Now}. Data: {data}");
    await _workflowHandler.HandleAsync(data);
}
The Problem
The Listener function processes the same message 3 times! And I have no idea why! I've Googled and read through a few StackOverflow threads such as this one, and it looks like everybody advises making sure the lock duration is long enough for the process to execute completely. However, I've set the lock duration to 5 minutes, yet the problem keeps coming back. I'd really appreciate any help on this.
Just adding this in here as it might be helpful for others.
After some more investigation I realized that, in my particular case, the issue was unrelated to Azure Functions and Service Bus. In the workflow handler that UpdateBookingCalendarListenerFunction sends messages to, I was calling some external APIs in a parallel approach, but for reasons unknown to me the handler code was calling the external APIs one additional time, regardless of how many records it iterated over. The code below shows how I had implemented the parallel API calls, and the second snippet shows how I now do it one by one, which eventually resolved the issue.
My original code - calling APIs in parallel
public async Task<IEnumerable<StaffMemberGraphApiResponse>> AddAdminsAsync(IEnumerable<UpdateStaffMember> admins, string bookingId)
{
    var apiResults = new List<StaffMemberGraphApiResponse>();
    var adminsToAdd = admins.Where(ad => ad.Action == "add");

    _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarWorkflowDetailHandler)} Recognized {adminsToAdd.Count()} admins to add to booking with id: {bookingId}");

    var addAdminsTasks = adminsToAdd.Select(admin => _addStaffGraphApiHandler.HandleAsync(new AddStaffToBookingGraphApiRequest
    {
        BookingId = bookingId,
        DisplayName = admin.DisplayName,
        EmailAddress = admin.EmailAddress,
        Role = StaffMemberAllowedRoles.Admin
    }));

    if (addAdminsTasks.Any())
    {
        var addAdminsTasksResults = await Task.WhenAll(addAdminsTasks);
        apiResults = _populateUpdateStaffMemberResponse.Populate(addAdminsTasksResults, StaffMemberAllowedRoles.Admin).ToList();
    }

    return apiResults;
}
And my new code, without aggregating the API calls into the addAdminsTasks collection and hence with no await Task.WhenAll(addAdminsTasks):
public async Task<IEnumerable<StaffMemberGraphApiResponse>> AddStaffMembersAsync(IEnumerable<UpdateStaffMember> members, string bookingId, string targetRole)
{
    var apiResults = new List<StaffMemberGraphApiResponse>();

    foreach (var item in members.Where(v => v.Action == "add"))
    {
        _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarWorkflowDetailHandler)} Adding {targetRole} to booking: {bookingId}. data: {JsonConvert.SerializeObject(item)}");

        apiResults.Add(_populateUpdateStaffMemberResponse.PopulateAsSingleItem(await _addStaffGraphApiHandler.HandleAsync(new AddStaffToBookingGraphApiRequest
        {
            BookingId = bookingId,
            DisplayName = item.DisplayName,
            EmailAddress = item.EmailAddress,
            Role = targetRole
        }), targetRole));
    }

    return apiResults;
}
I investigated the first approach and the number of tasks was an exact match of the number of items in the IEnumerable input, yet the API was called one additional time. And within _addStaffGraphApiHandler.HandleAsync there is literally nothing other than an HttpClient object that issues a POST request. Anyway, using the second code resolved the issue.
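A plausible explanation (my guess, not confirmed in the original post) is deferred LINQ execution: addAdminsTasks is a lazy Select projection, so the Any() call enumerates it and starts a task (and API call) for the first admin, and Task.WhenAll then re-enumerates the whole sequence and starts a fresh task for every admin, giving exactly one extra call. Materializing the projection once keeps the parallel version while avoiding the duplicate. A minimal sketch with hypothetical names:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class DeferredExecutionDemo
{
    // Hypothetical stand-in for _addStaffGraphApiHandler.HandleAsync.
    private static Task CallExternalApiAsync(string admin)
    {
        Console.WriteLine($"API called for {admin}");
        return Task.CompletedTask;
    }

    public static async Task RunAsync()
    {
        var admins = new[] { "a", "b", "c" };

        // Lazy projection: every enumeration starts a new set of tasks.
        IEnumerable<Task> lazyTasks = admins.Select(CallExternalApiAsync);

        // Problem: Any() enumerates once (starts the call for "a"), then WhenAll
        // enumerates again (starts calls for "a", "b", "c"), so "a" is called twice.
        if (lazyTasks.Any())
        {
            await Task.WhenAll(lazyTasks);
        }

        // Fix: materialize once so Any() and WhenAll share the same Task objects.
        List<Task> tasks = admins.Select(CallExternalApiAsync).ToList();
        if (tasks.Any())
        {
            await Task.WhenAll(tasks);
        }
    }
}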

Azure Http trigger function app works only when stop and restart again

I have an HTTP trigger function app, and I want to read messages from a Service Bus topic.
I installed the NuGet package Microsoft.Azure.ServiceBus.
But it only reads the messages once; the second time I trigger the HTTP function, the messages come back null.
When I stop and restart the function app and trigger it, it works fine the first time, and again on the second trigger the messages come back null.
Why such behavior?
[FunctionName("Test")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
string serviceBusConnectionString = Environment.GetEnvironmentVariable("ConnectionStringSettingName");
var messageReceiver = new MessageReceiver(serviceBusConnectionString, Environment.GetEnvironmentVariable("EntityPath"), ReceiveMode.PeekLock);
var messages = await messageReceiver.ReceiveAsync(500, TimeSpan.FromSeconds(1));
if (messages != null)
{
foreach (Message item in messages)
{
// process messages and complete it.
await messageReceiver.CompleteAsync(item.SystemProperties.LockToken);
I want to use an HTTP trigger only because I have to call this HTTP function from another place, so I cannot use a Service Bus trigger. Currently I'm calling the function app from only one place, so ReceiveAsync should work.
I think I need to use ReceiveAsync with ReceiveAndDelete, like below:
var messageReceiver = new MessageReceiver(serviceBusConnectionString, Environment.GetEnvironmentVariable("EntityPath"), ReceiveMode.ReceiveAndDelete);
var messages = await messageReceiver.ReceiveAsync(500);
When consuming a message you only get to read it once, so you are seeing the intended behaviour of Service Bus.
I would suggest you consider restructuring your solution to account for this.
It's difficult to provide a concrete solution as I don't know your full requirements, but I suggest either changing this Azure Function into a Service Bus triggered function and having it write the data out somewhere the logic app can query it easily, such as a storage account (a sketch of this follows).
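A minimal sketch of that first option, assuming an in-process Functions project with the Service Bus and Storage binding extensions installed, and hypothetical queue, container, and connection-setting names:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class PersistMessageFunction
{
    [FunctionName("PersistMessage")]
    public static void Run(
        // Hypothetical queue name and connection setting.
        [ServiceBusTrigger("incoming-messages", Connection = "ServiceBusConnectionString")] string message,
        // Writes each message to a new blob that the logic app can read later.
        [Blob("messages/{rand-guid}.json", FileAccess.Write, Connection = "StorageConnectionString")] out string blobContent,
        ILogger log)
    {
        log.LogInformation("Persisting Service Bus message to blob storage.");
        blobContent = message;
    }
}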
The second option would be to drop the function entirely and make use of the Service Bus trigger directly in your logic app. This is configured to poll the Service Bus every so often and will run repeatedly until all the messages have been processed before waiting for the next poll interval.
You need to use PeekAsync instead of ReceiveAsync.
Peek will keep the messages in the queue so that they can be handled by another process, whereas receive marks them as being processed to prevent multiple delivery. Restarting your function app would roll back the receive, which puts the message back on the queue.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.peekasync?view=azure-dotnet#Microsoft_Azure_ServiceBus_Core_MessageReceiver_PeekAsync
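A minimal sketch of that suggestion, assuming the same Microsoft.Azure.ServiceBus package and connection settings used in the question:

using System;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

// Inside the HTTP-triggered function:
var messageReceiver = new MessageReceiver(
    Environment.GetEnvironmentVariable("ConnectionStringSettingName"),
    Environment.GetEnvironmentVariable("EntityPath"),
    ReceiveMode.PeekLock);

// Peek reads up to 500 messages without locking or removing them,
// so they stay in the queue for other consumers.
var peekedMessages = await messageReceiver.PeekAsync(500);

foreach (Message item in peekedMessages)
{
    // Inspect item.Body here; peeked messages cannot be completed or abandoned.
}

Note that because nothing is removed, repeated peeks will keep returning the same messages until another receiver or the time-to-live clears them.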

Azure Function App delay retry for azure service bus

First let me explain what I have. I have an Azure Service Bus with an Azure Function App. The Service Bus is set up to use SQL filters to push specific message types into specific topics. My Azure Function App then gets the newest message and processes it.
A basic example
1: I send a request to my EmailAPI
2: EmailAPI then pushes a new message into the Service Bus with a type of "Email"
3: The SQL filter sees the type is "Email" and the message is placed into the email topic in the Service Bus
4: The EmailListener Azure Function monitors the Service bus and notices a new message
5: Gather the Service Bus message and process it (basically just send the email using the information provided)
Now let's say that for some reason the SMTP server connection is a little broken and sometimes we get a TimeOutException when attempting to send the email (EmailListener). When an exception is thrown, the EmailListener Function App will attempt to send it again instantly, with no wait. It will do this a total of 10 times and then tell the Service Bus to place the message in the dead-letter queue.
What I am attempting to do is, when an exception is thrown (such as TimeOutException), wait X amount of time before attempting to process the same message again. I have looked around at many different posts talking about host.json and attempting to set those settings, but these have not worked. I have found a solution, however it requires you to create a clone of the message and push it back into the Service Bus with a delayed processing time. I would prefer not to implement my own manual delay system if Azure Service Bus / Function Apps can deal with retries themselves.
The biggest issue I am having (which is probably down to my understanding) is: who is responsible? Is it the Service Bus settings that handle the retry policy, or is it the Azure Function App that should retry after X time?
I have provided some code, but I feel code isn't really going to help explain my question.
// Pseudo code
public static class EmailListenerTrigger
{
    [FunctionName("EmailListenerTrigger")]
    public static void Run([ServiceBusTrigger("messages", "email", Connection = "ConnectionString")] string mySbMsg, TraceWriter log)
    {
        var emailLauncher = new EmailLauncher("SmtpAddress", "SmtpPort", "FromAddress");

        try
        {
            emailLauncher.SendServiceBusMessage(mySbMsg);
        }
        catch (Exception ex)
        {
            log.Info($"Audit Log: {mySbMsg}, Exception: {ex.Message}");
        }
    }
}
reference one: https://blog.kloud.com.au/2017/05/22/message-retry-patterns-in-azure-functions/ (Thread.Sleep doesn't seem like a good idea)
reference two: https://github.com/Azure/azure-functions-host/issues/2192 (Manually implemented retry)
reference three: https://www.feval.ca/posts/function-queue-retry/ (This seems to refer to queues when I am using topics)
reference four: Can the Azure Service Bus be delayed before retrying a message? (Talks about deferring the message, but then you need to manually get it back out of the queue/topic.)
You might be able to solve your issue with the use of Durable Functions. There is for example a built-in method CallActivityWithRetryAsync() that can retry when the activity functions throws an exception.
https://learn.microsoft.com/en-us/sandbox/functions-recipes/durable-diagnostics#calling-activity-functions-with-retry
Your flow would probably look something like this (a sketch follows the list):
Service Bus triggered Function. This one starts an Orchestrator Function
The orchestrator calls your activity function (using the aforementioned method)
Your email sending is implemented in an Activity Function and can throw exceptions as needed
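A minimal sketch of that flow, assuming Durable Functions 2.x (Functions runtime 2.x or later) and reusing the EmailLauncher from the question's pseudo code; the queue, subscription, and retry values are placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class EmailRetryFunctions
{
    [FunctionName("EmailListenerTrigger")]
    public static async Task Run(
        [ServiceBusTrigger("messages", "email", Connection = "ConnectionString")] string mySbMsg,
        [DurableClient] IDurableOrchestrationClient starter)
    {
        // Hand the Service Bus message to an orchestration and complete the trigger.
        await starter.StartNewAsync("EmailOrchestrator", null, mySbMsg);
    }

    [FunctionName("EmailOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var mySbMsg = context.GetInput<string>();

        // Retry the activity up to 3 times, waiting 5 seconds before the first retry.
        var retryOptions = new RetryOptions(TimeSpan.FromSeconds(5), 3);
        await context.CallActivityWithRetryAsync("SendEmail", retryOptions, mySbMsg);
    }

    [FunctionName("SendEmail")]
    public static void SendEmail([ActivityTrigger] string mySbMsg)
    {
        // The SMTP call lives here; throwing (e.g. TimeoutException) triggers the retry policy above.
        var emailLauncher = new EmailLauncher("SmtpAddress", "SmtpPort", "FromAddress");
        emailLauncher.SendServiceBusMessage(mySbMsg);
    }
}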
While there is no native support for what you want to do, it is still doable without a lot of custom development. You can basically add a Service Bus output binding to your Azure Function that is connected to the same queue your function consumes messages from. Then, use a custom property to track the number of retries. The following is an example:
private static TimeSpan[] BackoffDurationsBetweenFailures = new TimeSpan[] { }; // add delays here

[FunctionName("retrying-poc")]
public async Task Run(
    [ServiceBusTrigger("myQueue")] Message rawRequest,
    IDictionary<string, object> userProperties,
    [ServiceBus("myQueue")] IAsyncCollector<Message> collector)
{
    var request = GetRequest(rawRequest);
    var retryCount = GetRetryCount(userProperties);
    var shouldRetry = false;

    try
    {
        await _unreliableService.Call(request);
    }
    catch (Exception ex)
    {
        // I don't retry if it is a timeout, but that's my own choice.
        shouldRetry = !(ex is TimeoutException) && retryCount < BackoffDurationsBetweenFailures.Length;
    }

    if (shouldRetry)
    {
        var retryMessage = new Message(rawRequest.Body);
        retryMessage.UserProperties.Add("RetryCount", retryCount + 1);
        retryMessage.ScheduledEnqueueTimeUtc = DateTime.UtcNow.Add(BackoffDurationsBetweenFailures[retryCount]);
        await collector.AddAsync(retryMessage);
    }
}

private MyBusinessObject GetRequest(Message rawRequest)
    => JsonConvert.DeserializeObject<MyBusinessObject>(Encoding.UTF8.GetString(rawRequest.Body));

private int GetRetryCount(IDictionary<string, object> properties)
    => properties.TryGetValue("RetryCount", out var value) && int.TryParse(value.ToString(), out var retryCount)
        ? retryCount
        : 0;

How do DocumentDBAttribute bindings respond to throttling?

I have Azure Functions (C# v1 functions, non-scripted) that use DocumentDBAttribute bindings for both reading and writing documents. How do those bindings respond to throttling in the following situations?
Writing an item by adding it to an ICollector
Reading an item by providing an Id
This is for functions v1.
First case:
//output binding
[DocumentDB(ResourceNames.APCosmosDBName,
    ResourceNames.EpisodeOfCareCollectionName,
    ConnectionStringSetting = "APCosmosDB",
    CreateIfNotExists = true)] ICollector<EOC> eoc,
//...
eoc.Add(new EOC()); //what happens here if throttling is occurring?
Second case:
[DocumentDB(ResourceNames.ORHCasesDBName, ResourceNames.ORHCasesCollectionName, ConnectionStringSetting = "ORHCosmosDBCases", CreateIfNotExists = true, Id = "{id}")] string closedCaseStr,
Both input and output bindings use the Cosmos DB SDK, which has a retry mechanism in place.
By default, the SDK retries 9 times on a throttled result; after that, the exception is bubbled up and your Function will fail. Depending on the trigger type, it will fail the HTTP call, put the message back on the queue, etc.
The retries respect the timing recommendation returned by Cosmos DB:
When a client is sending requests faster than the allowed rate, the service will return HttpStatusCode 429 (Too Many Request) to rate limit the client. The current implementation in the SDK will then wait for the amount of time the service tells it to wait and retry after the time has elapsed.
At the moment, there is no way to configure the bindings with a policy other than default.
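If the default policy is not sufficient, one workaround (outside the binding itself) is to drop down to the SDK and create your own DocumentClient with explicit retry settings. A minimal sketch, assuming the Microsoft.Azure.DocumentDB package and placeholder endpoint/key app settings:

using System;
using Microsoft.Azure.Documents.Client;

public static class CosmosClientFactory
{
    public static DocumentClient Create()
    {
        var connectionPolicy = new ConnectionPolicy
        {
            RetryOptions = new RetryOptions
            {
                // Retry throttled (429) requests up to 20 times...
                MaxRetryAttemptsOnThrottledRequests = 20,
                // ...waiting at most 60 seconds in total across retries.
                MaxRetryWaitTimeInSeconds = 60
            }
        };

        return new DocumentClient(
            new Uri(Environment.GetEnvironmentVariable("CosmosDBEndpoint")),
            Environment.GetEnvironmentVariable("CosmosDBKey"),
            connectionPolicy);
    }
}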

Async WCF Service

To make this easier to understand: we are using a database that does not have connection pooling built in, so we are implementing our own connection pooler.
OK, so the title probably did not give the best description. Let me first describe what I am trying to do. We have a WCF service (hosted in a Windows service) that needs to be able to take and process multiple requests at once. The WCF service takes a request and tries to talk to one of (say) 10 available database connections. These database connections are all tracked by the WCF service and are marked busy while processing. If a request comes in and all 10 database connections are busy, we would like the WCF service to wait for a connection and return the response when it becomes available.
We have tried a few different things. For example, we could have a while loop (yuck):
[OperationContract(AsyncPattern = true)]
string ExecuteProgram(string clientId, string program, string[] args)
{
    string requestId = DbManager.RegisterRequest(clientId, program, args);
    string response = null;

    while (response == null)
    {
        response = DbManager.GetResponseForRequestId(requestId);
    }

    return response;
}
Basically the DbManager would track requests and responses. Each request would call the DbManager, which would assign a request id. When a database connection becomes available, it would assign (say) Responses[requestId] = [the database response]. The request would constantly ask the DbManager whether it had a response, and when it did, the request could return it.
This has problems all over the place. We could possibly have multiple threads stuck in while loops for who knows how long. That would be terrible for performance and CPU usage. (To say the least)
We have also looked into trying this with events / listeners. I don't know how this would be accomplished so the code below is more of how we envisioned it working.
[OperationContract(AsyncPattern = true)]
ExecuteProgram(string clientId, string program, string[] args)
{
    // register an event
    // listen for that event
    // when that event is called return its value
}
We have also looked into the DbManager having a queue or using things like Pulse/Monitor.Wait (which we are unfamiliar with).
So, the question is: How can we have an async WCF Operation that returns when it is able to?
WCF supports the async/await keywords in .NET 4.5: http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx. You would need to do a bit of refactoring to make your ExecuteProgram async and make your DbManager request operation awaitable.
If you need your DbManager to manage the completion of these tasks as results become available for given clientIds, you can map each clientId to a TaskCompletionSource. The TaskCompletionSource can be used to create a Task and the DbManager can use the TaskCompletionSource to set the results.
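A minimal sketch of that idea, with hypothetical DbManager internals; the WCF operation returns a Task<string> that WCF awaits, and the pool completes the matching TaskCompletionSource when a connection has produced a response:

using System;
using System.Collections.Concurrent;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IProgramService
{
    [OperationContract]
    Task<string> ExecuteProgramAsync(string clientId, string program, string[] args);
}

public class ProgramService : IProgramService
{
    public Task<string> ExecuteProgramAsync(string clientId, string program, string[] args)
    {
        // No thread is blocked; the operation completes when DbManager sets the result.
        return DbManager.EnqueueRequestAsync(clientId, program, args);
    }
}

public static class DbManager
{
    // requestId -> completion source that a pooled connection will complete later.
    private static readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _pending =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    public static Task<string> EnqueueRequestAsync(string clientId, string program, string[] args)
    {
        var requestId = RegisterRequest(clientId, program, args); // existing queuing/registration logic
        var tcs = new TaskCompletionSource<string>();
        _pending[requestId] = tcs;
        return tcs.Task;
    }

    // Called by whatever code services the request on a pooled DB connection.
    public static void CompleteRequest(string requestId, string response)
    {
        if (_pending.TryRemove(requestId, out var tcs))
        {
            tcs.SetResult(response);
        }
    }

    private static string RegisterRequest(string clientId, string program, string[] args)
        => Guid.NewGuid().ToString(); // placeholder for the real implementation
}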
This should work, with a properly-implemented async method to call:
[OperationContract]
string ExecuteProgram(string clientId, string program, string[] args)
{
    Task<string> task = DbManager.DoRequestAsync(clientId, program, args);
    return task.Result;
}
Are you manually managing the 10 DB connections? It sounds like you've re-implemented database connection pooling. Perhaps you should be using the connection pooling built into your DB server or driver.
If you only have a single database server (which I suspect is likely), then just use a BlockingCollection for your pool.
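A minimal sketch of that suggestion, with a hypothetical DbConnectionWrapper type standing in for whatever your driver exposes; Take() blocks only the calling request until a connection is returned, instead of spinning in a loop:

using System.Collections.Concurrent;

// Hypothetical wrapper around one physical connection to the database.
public class DbConnectionWrapper { /* open connection, Execute(...), etc. */ }

public class ConnectionPool
{
    private readonly BlockingCollection<DbConnectionWrapper> _pool =
        new BlockingCollection<DbConnectionWrapper>(new ConcurrentBag<DbConnectionWrapper>());

    public ConnectionPool(int size)
    {
        for (var i = 0; i < size; i++)
        {
            _pool.Add(new DbConnectionWrapper());
        }
    }

    // Blocks the caller until one of the pooled connections is free.
    public DbConnectionWrapper Take() => _pool.Take();

    // Return the connection so the next waiting request can proceed.
    public void Return(DbConnectionWrapper connection) => _pool.Add(connection);
}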
