Retry on 408 Timeout from Azure Table Storage service - c#

We are using Azure Table Storage and are getting occasional 408 Timeouts when performing an InsertOrMerge operation. In this case we would like to retry, but it appears that the retry policy is not being followed for these errors.
This is a class we use to handle the table interaction. The method GetFooEntityAsync tries to retrieve the entity from Table Storage. If it cannot, it creates a new FooEntity and adds it to the table (mapping to a FooTableEntity).
public class FooTableStorageBase
{
private readonly string tableName;
protected readonly CloudStorageAccount storageAccount;
protected TableRequestOptions DefaultTableRequestOptions { get; }
protected OperationContext DefaultOperationContext { get; }
public CloudTable Table
{
get
{
return storageAccount.CreateCloudTableClient().GetTableReference(tableName);
}
}
public FooTableStorageBase(string tableName)
{
if (String.IsNullOrWhiteSpace(tableName))
{
throw new ArgumentNullException(nameof(tableName));
}
this.tableName = tableName;
storageAccount = CloudStorageAccount.Parse(ConnectionString);
ServicePoint tableServicePoint = ServicePointManager.FindServicePoint(storageAccount.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
tableServicePoint.ConnectionLimit = 100; // Increasing connection limit from default of 2.
DefaultTableRequestOptions = new TableRequestOptions()
{
PayloadFormat = TablePayloadFormat.JsonNoMetadata,
MaximumExecutionTime = TimeSpan.FromSeconds(1),
RetryPolicy = new OnTimeoutRetry(TimeSpan.FromMilliseconds(250), 3),
LocationMode = LocationMode.PrimaryOnly
};
DefaultOperationContext = new OperationContext();
DefaultOperationContext.Retrying += (sender, args) =>
{
// This is never executed.
Debug.WriteLine($"Retry policy activated in {this.GetType().Name} due to HTTP code {args.RequestInformation.HttpStatusCode} with exception {args.RequestInformation.Exception.ToString()}");
};
DefaultOperationContext.RequestCompleted += (sender, args) =>
{
if (args.Response == null)
{
// This is occasionally executed - we want to retry in this case.
Debug.WriteLine($"Request failed in {this.GetType().Name} due to HTTP code {args.RequestInformation.HttpStatusCode} with exception {args.RequestInformation.Exception.ToString()}");
}
else
{
Debug.WriteLine($"{this.GetType().Name} operation complete: Status code {args.Response.StatusCode} at {args.Response.ResponseUri}");
}
};
Table.CreateIfNotExists(DefaultTableRequestOptions, DefaultOperationContext);
}
public async Task<FooEntity> GetFooEntityAsync()
{
var retrieveOperation = TableOperation.Retrieve<FooTableEntity>(FooTableEntity.GenerateKey());
var tableEntity = (await Table.ExecuteAsync(retrieveOperation, DefaultTableRequestOptions, DefaultOperationContext)).Result as FooTableEntity;
if (tableEntity != null)
{
return tableEntity.ToFooEntity();
}
var fooEntity = CalculateFooEntity();
var insertOperation = TableOperation.InsertOrMerge(new FooTableEntity(fooEntity));
var executeResult = await Table.ExecuteAsync(insertOperation);
if (executeResult.HttpStatusCode == 408)
{
// This is never executed.
Debug.WriteLine("Got a 408");
}
return fooEntity;
}
public class OnTimeoutRetry : IRetryPolicy
{
int maxRetryAttempts = 3;
TimeSpan defaultRetryInterval = TimeSpan.FromMilliseconds(250);
public OnTimeoutRetry(TimeSpan deltaBackoff, int retryAttempts)
{
maxRetryAttempts = retryAttempts;
defaultRetryInterval = deltaBackoff;
}
public IRetryPolicy CreateInstance()
{
return new OnTimeoutRetry(TimeSpan.FromMilliseconds(250), 3);
}
public bool ShouldRetry(int currentRetryCount, int statusCode, Exception lastException, out TimeSpan retryInterval, OperationContext operationContext)
{
retryInterval = defaultRetryInterval;
if (currentRetryCount >= maxRetryAttempts)
{
return false;
}
// Non-retryable status codes are the 400-class client errors (>= 400 and < 500), e.g. Bad Request or Not Found, as well as 501 and 505.
// This custom retry policy additionally treats a 408 timeout as retryable.
if ((statusCode >= 400 && statusCode < 500 && statusCode != 408) || statusCode == 501 || statusCode == 505)
{
return false;
}
return true;
}
}
}
When calling GetFooEntityAsync(), occasionally the "Request failed" line is executed. When inspecting the values, args.RequestInformation.HttpStatusCode is 408. However:
The Debug.WriteLine("Got a 408"); line within the GetFooEntityAsync method is never executed.
The Debug.WriteLine($"Retry policy activated... line within the DefaultOperationContext.Retrying delegate is never executed (I would expect this to be executed twice - is this not retrying?).
DefaultOperationContext.RequestResults contains a long list of results (mostly with status code 404, some 204s).
According to this (rather old) blog post, exceptions with codes between 400 and 500, as well as 501 and 505, are non-retryable. However, a timeout (408) is exactly the situation in which we would want a retry. Perhaps I need to write a custom retry policy for this case.
I don't fully understand where the 408 is coming from, as I can't find it in the code other than when the RequestCompleted delegate is invoked. I have been trying different settings for my retry policy without luck. What am I missing here? How can I get the operation to retry on a 408 from Table Storage?
EDIT: I have updated the code to show the custom retry policy that I implemented to retry on 408 errors. However, my breakpoints on retry are still not being hit, so it appears the retry is not being triggered. What could be the reason my retry policy is not being activated?
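One detail worth noting from the code above: the retrieve call passes DefaultTableRequestOptions and DefaultOperationContext to ExecuteAsync, but the InsertOrMerge call does not, so that request runs with the table client's default options rather than the custom retry policy. A minimal sketch of the insert with the same options and context passed through (the same ExecuteAsync overload the retrieve call already uses):
var insertOperation = TableOperation.InsertOrMerge(new FooTableEntity(fooEntity));
// Pass the custom options/context so OnTimeoutRetry and the Retrying handler can apply to this call too.
var executeResult = await Table.ExecuteAsync(insertOperation, DefaultTableRequestOptions, DefaultOperationContext);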

Related

How can I cancel an asynchronous task after a given time and how can I restart a failed task?

How can I cancel an asynchronous task when it takes very long to complete or will probably never complete? Is it possible to set a given time (for example 10 seconds) for each task so that, when it doesn't complete within that time, the task is automatically cancelled?
Is it possible to restart a task, or create the same task again, after it has failed? What can I do if one of the tasks in a task list fails? Is it possible to restart only the failed task?
In my code, playerCountryDataUpdate should only be executed after every task in TasksList1 has completed without error or exception. I want to restart a task when it fails. When the same task fails again, it should not be restarted and an error message should be displayed on the screen. How can I do that?
bool AllMethods1Completed = false;
bool AllMethods2Completed = false;
public async Task PlayerAccountDetails()
{
var playerCountryDataGet = GetPlayerCountryData();
var playerTagsData = GetPlayerTagsData();
var TasksList1 = new List<Task> { playerCountryDataGet, playerTagsData };
try
{
await Task.WhenAll(TasksList1);
AllMethods1Completed = true;
}
catch
{
AllMethods1Completed = false;
}
if (AllMethods1Completed == true)
{
var playerCountryDataUpdate = UpdatePlayerCountryData("Germany", "Berlin");
var TasksList2 = new List<Task> { playerCountryDataUpdate };
try
{
await Task.WhenAll(TasksList2);
AllMethods2Completed = true;
}
catch
{
AllMethods2Completed = false;
}
}
}
private async Task GetPlayerTagsData()
{
var resultprofile = await PlayFabServerAPI.GetPlayerTagsAsync(new PlayFab.ServerModels.GetPlayerTagsRequest()
{
PlayFabId = PlayerPlayFabID
});
if (resultprofile.Error != null)
Console.WriteLine(resultprofile.Error.GenerateErrorReport());
else
{
if ((resultprofile.Result != null) && (resultprofile.Result.Tags.Count() > 0))
PlayerTag = resultprofile.Result.Tags[0].ToString();
}
}
private async Task GetPlayerCountryData()
{
var resultprofile = await PlayFabClientAPI.GetUserDataAsync(new PlayFab.ClientModels.GetUserDataRequest()
{
PlayFabId = PlayerPlayFabID,
Keys = null
});
if (resultprofile.Error != null)
Console.WriteLine(resultprofile.Error.GenerateErrorReport());
else
{
if (resultprofile.Result.Data == null || !resultprofile.Result.Data.ContainsKey("Country") || !resultprofile.Result.Data.ContainsKey("City"))
Console.WriteLine("No Country/City");
else
{
PlayerCountry = resultprofile.Result.Data["Country"].Value;
PlayerCity = resultprofile.Result.Data["City"].Value;
}
}
}
private async Task UpdatePlayerCountryData(string country, string city)
{
var resultprofile = await PlayFabClientAPI.UpdateUserDataAsync(new PlayFab.ClientModels.UpdateUserDataRequest()
{
Data = new Dictionary<string, string>() {
{"Country", country},
{"City", city}
},
Permission = PlayFab.ClientModels.UserDataPermission.Public
});
if (resultprofile.Error != null)
Console.WriteLine(resultprofile.Error.GenerateErrorReport());
else
Console.WriteLine("Successfully updated user data");
}
You need to build a cancellation mechanism directly into the task itself. C# provides the CancellationTokenSource and CancellationToken classes to assist with this: https://learn.microsoft.com/en-us/dotnet/api/system.threading.cancellationtoken?view=netcore-3.1
Add an (optional) CancellationToken to your task's parameters. Then check the token at appropriate intervals to determine whether the task needs to abort before it completes.
In the case of a long-running query, it would be best to figure out how to break the query into chunks and then check the CancellationToken between queries.
private async Task<List<PlayerXXXData>> GetPlayerXXXData(CancellationToken ct = default)
{
    int limit = 100;
    int total = Server.GetPlayerXXXCount();
    var results = new List<PlayerXXXData>();
    // Check the token between chunks so the work stops promptly once cancellation is requested.
    while (!ct.IsCancellationRequested && results.Count < total)
    {
        results.AddRange(await Task.Run(() => Server.GetPlayerXXXData(results.Count, limit), ct));
    }
    return results;
}
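A short usage sketch of the method above: a CancellationTokenSource can be constructed with a delay so the token cancels itself automatically, which matches the "10 seconds per task" part of the question (the surrounding call site is hypothetical):
// Cancel automatically after 10 seconds.
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
{
    var data = await GetPlayerXXXData(cts.Token);
    // data holds whatever chunks were fetched before cancellation (possibly all of them).
}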
Mind the above has no error handling in it; but you get the idea. You might consider making it faster (to start using the data) by implementing Deferred Execution with your own custom IEnumerable implementation. Then you can query one chunk and iterate over that chunk before querying for the next chunk. This could also help prevent you from loading too much into RAM - depending upon the number of records you are intending to process.
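The deferred-execution idea mentioned above could look like the following iterator, so each chunk is only fetched when the caller's iteration reaches it (Server, PlayerXXXData, and the chunked GetPlayerXXXData call are the same hypothetical placeholders used above, assumed here to return a List<PlayerXXXData>):
private IEnumerable<PlayerXXXData> EnumeratePlayerXXXData(int chunkSize = 100)
{
    int total = Server.GetPlayerXXXCount();
    int fetched = 0;
    while (fetched < total)
    {
        // Each chunk is requested lazily, only when iteration reaches this point.
        var chunk = Server.GetPlayerXXXData(fetched, chunkSize);
        if (chunk.Count == 0)
            yield break; // nothing more to read; stop instead of looping forever
        foreach (var item in chunk)
            yield return item;
        fetched += chunk.Count;
    }
}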
Set a timeout in your logic to limit how long the task can take:
int timeout = 1000;
var task = SomeOperationAsync();
if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
// task completed within timeout
} else {
// timeout logic
}
Asynchronously wait for Task<T> to complete with timeout
You can also put try/catch blocks in a while loop with a counter for as long as you want to retry:
var retry = 0;
while (retry <= 3)
{
    try
    {
        // Await with a timeout, as in the snippet above.
        var task = SomeOperationAsync();
        if (await Task.WhenAny(task, Task.Delay(timeout)) != task)
        {
            // Raise a timeout exception so the catch below can decide whether to retry.
            throw new TimeoutException();
        }
        break; // completed within the timeout, no retry needed
    }
    catch (TimeoutException)
    {
        retry++;
        if (retry == 3)
        {
            throw; // give up and rethrow the timeout after the last attempt
        }
    }
}

Google API Client for .Net: Implement retry when a request fails

How can I implement retries in case a request that is part of a batch request fails when interacting with Google's API? In their documentation, they suggest adding an "Exponential Backoff" algorithm. I'm using the following code snippet from their documentation:
UserCredential credential;
using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
{
credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
GoogleClientSecrets.Load(stream).Secrets,
new[] { CalendarService.Scope.Calendar },
"user", CancellationToken.None, new FileDataStore("Calendar.Sample.Store"));
}
// Create the service.
var service = new CalendarService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = "Google Calendar API Sample",
});
// Create a batch request.
var request = new BatchRequest(service);
request.Queue<CalendarList>(service.CalendarList.List(),
(content, error, i, message) =>
{
// Put your callback code here.
});
request.Queue<Event>(service.Events.Insert(
new Event
{
Summary = "Learn how to execute a batch request",
Start = new EventDateTime() { DateTime = new DateTime(2014, 1, 1, 10, 0, 0) },
End = new EventDateTime() { DateTime = new DateTime(2014, 1, 1, 12, 0, 0) }
}, "YOUR_CALENDAR_ID_HERE"),
(content, error, i, message) =>
{
// Put your callback code here.
});
// You can add more Queue calls here.
// Execute the batch request, which includes the 2 requests above.
await request.ExecuteAsync();
Here is a simple helper class to make it easy to implement exponential backoff for a lot of the situations that Google talks about on their API error page: https://developers.google.com/calendar/v3/errors
How to Use:
Edit the class below to include your client secret and application name as you set up on https://console.developers.google.com
In the startup of your application (or when you ask the user to authorize), call GCalAPIHelper.Instance.Auth(credentialsPath);
Anywhere you would call the Google Calendar API (e.g. Get, Insert, Delete, etc.), instead use this class by doing: GCalAPIHelper.Instance.CreateEvent(event, calendarId); (you may need to expand this class to other API endpoints as your needs require)
using Google;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Calendar.v3;
using Google.Apis.Calendar.v3.Data;
using Google.Apis.Services;
using Google.Apis.Util.Store;
using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading;
using static Google.Apis.Calendar.v3.CalendarListResource.ListRequest;
/*======================================================================================
* This file is to implement Google Calendar .NET API endpoints WITH exponential backoff.
*
* How to use:
* - Install the Google Calendar .NET API (nuget.org/packages/Google.Apis.Calendar.v3)
* - Edit the class below to include your client secret and application name as you
* set up on https://console.developers.google.com
* - In the startup of your application (or when you ask the user to authorize), call
* GCalAPIHelper.Instance.Auth();
* - Anywhere you would call the Google Calendar API (eg Get, Insert, Delete, etc),
* instead use this class by doing:
* GCalAPIHelper.Instance.CreateEvent(event, calendarId); (you may need to expand
* this class to other API endpoints as your needs require)
*======================================================================================
*/
namespace APIHelper
{
public class GCalAPIHelper
{
#region Singleton
private static GCalAPIHelper instance;
public static GCalAPIHelper Instance
{
get
{
if (instance == null)
instance = new GCalAPIHelper();
return instance;
}
}
#endregion Singleton
#region Private Properties
private CalendarService service { get; set; }
private string[] scopes = { CalendarService.Scope.Calendar };
private const string CLIENTSECRETSTRING = "YOUR_SECRET"; //Paste in your JSON client secret here. Don't forget to escape special characters!
private const string APPNAME = "YOUR_APPLICATION_NAME"; //Paste in your Application name here
#endregion Private Properties
#region Constructor and Initializations
public GCalAPIHelper()
{
}
public void Auth(string credentialsPath)
{
if (service != null)
return;
UserCredential credential;
byte[] byteArray = Encoding.ASCII.GetBytes(CLIENTSECRETSTRING);
using (var stream = new MemoryStream(byteArray))
{
credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
GoogleClientSecrets.Load(stream).Secrets,
scopes,
Environment.UserName,
CancellationToken.None,
new FileDataStore(credentialsPath, true)).Result;
}
service = new CalendarService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = APPNAME
});
}
#endregion Constructor and Initializations
#region Private Methods
private TResponse DoActionWithExponentialBackoff<TResponse>(CalendarBaseServiceRequest<TResponse> request)
{
return DoActionWithExponentialBackoff(request, new HttpStatusCode[0]);
}
private TResponse DoActionWithExponentialBackoff<TResponse>(CalendarBaseServiceRequest<TResponse> request, HttpStatusCode[] otherBackoffCodes)
{
int delay = 100;
while (delay < 1000) //If the delay gets above 1 second, give up
{
try
{
return request.Execute();
}
catch (GoogleApiException ex)
{
if (ex.HttpStatusCode == HttpStatusCode.Forbidden || //Rate limit exceeded
ex.HttpStatusCode == HttpStatusCode.ServiceUnavailable || //Backend error
ex.HttpStatusCode == HttpStatusCode.NotFound ||
ex.Message.Contains("That’s an error") || //Handles the Google error pages like https://i.imgur.com/lFDKFro.png
otherBackoffCodes.Contains(ex.HttpStatusCode))
{
Common.Log($"Request failed. Waiting {delay} ms before trying again");
Thread.Sleep(delay);
delay += 100;
}
else
throw;
}
}
throw new Exception("Retry attempts failed");
}
#endregion Private Methods
#region Public Properties
public bool IsAuthorized
{
get { return service != null; }
}
#endregion Public Properties
#region Public Methods
public Event CreateEvent(Event eventToCreate, string calendarId)
{
EventsResource.InsertRequest eventCreateRequest = service.Events.Insert(eventToCreate, calendarId);
return DoActionWithExponentialBackoff(eventCreateRequest);
}
public Event InsertEvent(Event eventToInsert, string calendarId)
{
EventsResource.InsertRequest eventCopyRequest = service.Events.Insert(eventToInsert, calendarId);
return DoActionWithExponentialBackoff(eventCopyRequest);
}
public Event UpdateEvent(Event eventToUpdate, string calendarId, bool sendNotifications = false)
{
EventsResource.UpdateRequest eventUpdateRequest = service.Events.Update(eventToUpdate, calendarId, eventToUpdate.Id);
eventUpdateRequest.SendNotifications = sendNotifications;
return DoActionWithExponentialBackoff(eventUpdateRequest);
}
public Event GetEvent(Event eventToGet, string calendarId)
{
return GetEvent(eventToGet.Id, calendarId);
}
public Event GetEvent(string eventIdToGet, string calendarId)
{
EventsResource.GetRequest eventGetRequest = service.Events.Get(calendarId, eventIdToGet);
return DoActionWithExponentialBackoff(eventGetRequest);
}
public CalendarListEntry GetCalendar(string calendarId)
{
CalendarListResource.GetRequest calendarGetRequest = service.CalendarList.Get(calendarId);
return DoActionWithExponentialBackoff(calendarGetRequest);
}
public Events ListEvents(string calendarId, DateTime? startDate = null, DateTime? endDate = null, string q = null, int maxResults = 250)
{
EventsResource.ListRequest eventListRequest = service.Events.List(calendarId);
eventListRequest.ShowDeleted = false;
eventListRequest.SingleEvents = true;
eventListRequest.OrderBy = EventsResource.ListRequest.OrderByEnum.StartTime;
if (startDate != null)
eventListRequest.TimeMin = startDate;
if (endDate != null)
eventListRequest.TimeMax = endDate;
if (!string.IsNullOrEmpty(q))
eventListRequest.Q = q;
eventListRequest.MaxResults = maxResults;
return DoActionWithExponentialBackoff(eventListRequest);
}
public CalendarList ListCalendars(string accessRole)
{
CalendarListResource.ListRequest calendarListRequest = service.CalendarList.List();
calendarListRequest.MinAccessRole = (MinAccessRoleEnum)Enum.Parse(typeof(MinAccessRoleEnum), accessRole);
return DoActionWithExponentialBackoff(calendarListRequest);
}
public void DeleteEvent(Event eventToDelete, string calendarId, bool sendNotifications = false)
{
DeleteEvent(eventToDelete.Id, calendarId, sendNotifications);
}
public void DeleteEvent(string eventIdToDelete, string calendarId, bool sendNotifications = false)
{
EventsResource.DeleteRequest eventDeleteRequest = service.Events.Delete(calendarId, eventIdToDelete);
eventDeleteRequest.SendNotifications = sendNotifications;
DoActionWithExponentialBackoff(eventDeleteRequest, new HttpStatusCode[] { HttpStatusCode.Gone });
}
#endregion Public Methods
}
}
derekantrican has an answer that I based mine off of. Two things: first, if the resource is "notFound", waiting won't do any good. That's the server responding that the requested object was not found, so there's no need to back off. I'm not sure yet if there are other codes I need to handle, but I will be looking at it more closely. According to Google (https://cloud.google.com/iot/docs/how-tos/exponential-backoff), all 5xx and 429 responses should be retried.
Also, Google wants this to be an exponential backoff, not a linear one, so the code below handles it exponentially. They also want you to add a random number of milliseconds (jitter) to the retry timeout. I don't do this, but it would be easy to do (see the sketch after the code); I just don't think it matters that much.
I also needed the requests to be async, so I updated the worker methods accordingly. See derekantrican's examples for how to call the methods; these are just the worker methods. Instead of returning default on notFound, you could also rethrow the exception and handle it upstream.
private async Task<TResponse> DoActionWithExponentialBackoff<TResponse>(DirectoryBaseServiceRequest<TResponse> request)
{
return await DoActionWithExponentialBackoff(request, new HttpStatusCode[0]);
}
private async Task<TResponse> DoActionWithExponentialBackoff<TResponse>(DirectoryBaseServiceRequest<TResponse> request, HttpStatusCode[] otherBackoffCodes)
{
int timeDelay = 100;
int retries = 1;
int backoff = 1;
while (retries <= 5)
{
try
{
return await request.ExecuteAsync();
}
catch (GoogleApiException ex)
{
if (ex.HttpStatusCode == HttpStatusCode.NotFound)
return default;
else if (ex.HttpStatusCode == HttpStatusCode.Forbidden || //Rate limit exceeded
ex.HttpStatusCode == HttpStatusCode.ServiceUnavailable || //Backend error
ex.Message.Contains("That’s an error") || //Handles the Google error pages like https://i.imgur.com/lFDKFro.png
otherBackoffCodes.Contains(ex.HttpStatusCode))
{
//Common.Log($"Request failed. Waiting {delay} ms before trying again");
Thread.Sleep(timeDelay);
timeDelay += 100 * backoff;
backoff = backoff * (retries++ + 1);
}
else
throw; // rethrow, preserving the original stack trace
}
}
throw new Exception("Retry attempts failed");
}
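As mentioned above, Google also suggests adding a random number of milliseconds to each retry delay. That jitter is not part of the code here; a minimal sketch of one way to add it (purely illustrative, using System.Random):
private static readonly Random jitter = new Random();
private static int WithJitter(int baseDelayMs)
{
    // Add up to 250 ms of randomness so many clients don't all retry in lockstep.
    return baseDelayMs + jitter.Next(0, 250);
}
The wait in the catch block would then become await Task.Delay(WithJitter(timeDelay));.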

Calling an asynchronous function within a for loop in TypeScript, and an NHibernate deadlock

I am writing an HTTP POST call inside a for loop with TypeScript.
While debugging the method in the backend, I noticed that it handles the requests simultaneously.
For example, let's say the server must run two methods, M1() then M2(), per request. For the test case with n = 2, the server executes M1() for request 1, M1() for request 2, then M2() for one request and finally M2() for the other.
Then, after the _session.commit(), an exception is thrown in the intercept method. The exception description is:
NHibernate.StaleObjectStateException: 'Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [Server.Model.Identity.ApplicationRole#3]'
Code:
public calculate(index?: number): void {
for (var i = 0; i < this.policy.coverages.length; i++) {
this.callCalculate(i);
}
}
public callCalculate(i: number): void {
this.premiumCalculationParams = this.policy.createPremiumCalculationParams();
if (!this.premiums) {
this.premiums = new Array(this.policy.coverages.length);
}
this.offerService.calculatePremiums(this.policy, i).then((result: any) => {
this.premiums[i] = new Array<Premium>();
this.surpremiums = new Array<PremiumResult>();
if (result.data && result.data.premiumTable && result.data.premiumTable.premiumPayment && result.data.premiumTable.premiumPayment.premiums && result.data.premiumTable.premiumPayment.premiums.length > 0) {
_.each(result.data.premiumTable.premiumPayment.premiums, (premiumValue: any) => {
let premium: Premium = new Premium();
premium.setPremium(premiumValue);
this.premiums[i].push(premium);
this.policy.getCoverage(i).premiumPayment.premiums = angular.copy(this.premiums[i]);
});
if (result.data && result.data.results && result.data.results.length > 0) {
_.each(result.data.results, (premiumValuel: any) => {
let sp = new PremiumResult();
sp.setPremiumResult(premiumValuel);
sp.premiums = new Array<Premium>();
_.each(premiumValuel.premiums, (premiumValue: any) => {
let premium: Premium = new Premium();
premium.setPremium(premiumValue);
sp.premiums.push(premium);
});
this.surpremiums.push(sp);
});
}
console.log(this.surpremiums);
}
if (result.data && result.data.premiumTable && result.data.premiumTable.messageList && result.data.premiumTable.messageList.messages && result.data.premiumTable.messageList.messages.length > 0) {
_.each(result.data.premiumTable.messageList.messages, (message: any) => {
let messageType: any = MessageType[message.messageLevel.toString()];
this.messages.push(new Message(messageType, "premiums", message.messageContent, this.premiums[i]));
});
}
}, (err: any) => {
this.premiums[i] = null;
this.surpremiums = null;
if (err && err.data && err.data.modelState) {
for (var key in err.data.modelState) {
var model = err.data.modelState[key];
_.each(model, (state: string) => {
this.$log.debug(OfferControllerPrefix, "Calculation failed: " + state);
});
}
}
this.messages.push(new Message(MessageType.SVR_ERROR, "premiumCalculationParams", this.jsTranslations.getTranslation(Constants.DEFAULT_ERROR_NL), this.policy));
});
}
public calculatePremiums(policy: Policy, selectedCoverageIndex : number): any {
var uri = this.uriService.buildURI("Policy/Calculate");
var data = {
'policy': policy,
'selectedCoverageIndex': selectedCoverageIndex
};
return this.$http.post<any>(uri, data);
}
How can I solve this issue?
To chain two asynchronous operations, return a value (or a promise) from the handler passed to .then:
function M1(data) {
return $http.post(url1, data);
}
function M2(data) {
return $http.post(url2, data);
}
Chain the two functions like so:
function M1M2 () {
var promise = M1(data);
var promise2 = promise.then(function(data) {
var data2 = fn(data);
var m2Promise = M2(data2);
return m2Promise;
});
return promise2;
}
Because calling the .then method of a promise returns a new derived promise, it is easily possible to create a chain of promises.
It is possible to create chains of any length and since a promise can be resolved with another promise (which will defer its resolution further), it is possible to pause/defer resolution of the promises at any point in the chain. This makes it possible to implement powerful APIs.
For more information, see AngularJS $q Service API Reference - chaining promises.

RabbitMQ response is being lost in controller

Good evening everyone. I've got a web app written using .NET and a mobile app.
I'm sending some values to the RabbitMQ server through my web app and this is working fine; I put them in a queue, but when the mobile app accepts the request, I don't get the returned value.
Here is my controller
public async Task<ActionResult> GetCollect(int id)
{
int PartnerId = 0;
bool SentRequest = false;
try
{
SentRequest = await RuleRabbitMQ.SentRequestRule(id);
if(SentRequest )
{
PartnerId = await RuleRabbitMQ.RequestAccepted();
}
}
catch (Exception Ex)
{
}
}
This is my RabbitMQ class
public class InteractionRabbitMQ
{
public async Task<bool> SentRequestRule(int id)
{
bool ConnectionRabbitMQ = false;
await Task.Run(() =>
{
try
{
ConnectionFactory connectionFactory = new ConnectionFactory()
{
//credentials go here
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.QueueDeclare("SolicitacaoSameDay", true, false, false, null);
string rpcResponseQueue = channel.QueueDeclare().QueueName;
string correlationId = Guid.NewGuid().ToString();
IBasicProperties basicProperties = channel.CreateBasicProperties();
basicProperties.ReplyTo = rpcResponseQueue;
basicProperties.CorrelationId = correlationId;
byte[] messageBytes = Encoding.UTF8.GetBytes(string.Concat(" ", id.ToString()));
channel.BasicPublish("", "SolicitacaoSameDay", basicProperties, messageBytes);
channel.Close();
connection.Close();
if (connection != null)
{
ConnectionRabbitMQ = true;
}
else
{
ConnectionRabbitMQ = false;
}
}
catch (Exception Ex)
{
throw new ArgumentException($"Thre was a problem with RabbitMQ server. " +
$"Pleaser, contact the support with Error: {Ex.ToString()}");
}
});
return ConnectionRabbitMQ;
}
public async Task<int> RequestAccepted()
{
bool SearchingPartner= true;
int PartnerId = 0;
await Task.Run(() =>
{
try
{
var connectionFactory = new ConnectionFactory()
{
// credentials
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.BasicQos(0, 1, false);
var eventingBasicConsumer = new EventingBasicConsumer(channel);
eventingBasicConsumer.Received += (sender, basicDeliveryEventArgs) =>
{
string Response = Encoding.UTF8.GetString(basicDeliveryEventArgs.Body, 0, basicDeliveryEventArgs.Body.Length);
channel.BasicAck(basicDeliveryEventArgs.DeliveryTag, false);
if(!string.IsNullOrWhiteSpace(Response))
{
int Id = Convert.ToInt32(Response);
PartnerId = Id > 0 ? Id : 0;
SearchingPartner = false;
}
};
channel.BasicConsume("SolicitacaoAceitaSameDay", false, eventingBasicConsumer);
}
catch (Exception Ex)
{
// error message
}
});
return PartnerId;
}
}
I am not sure this works, as I can't build an infrastructure to test it quickly, but your issue is that RequestAccepted returns a Task which completes before the Received event is raised by the Rabbit client library.
Syncing the two could possibly resolve the issue; note, however, that this could potentially make your code wait a very long time for (or even never get) the response.
public Task<int> RequestAccepted()
{
bool SearchingPartner= true;
int PartnerId = 0;
var connectionFactory = new ConnectionFactory()
{
// credentials
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.BasicQos(0, 1, false);
TaskCompletionSource<int> tcs = new TaskCompletionSource<int>();
var eventingBasicConsumer = new EventingBasicConsumer(channel);
eventingBasicConsumer.Received += (sender, basicDeliveryEventArgs) =>
{
string Response = Encoding.UTF8.GetString(basicDeliveryEventArgs.Body, 0, basicDeliveryEventArgs.Body.Length);
channel.BasicAck(basicDeliveryEventArgs.DeliveryTag, false);
if(!string.IsNullOrWhiteSpace(Response))
{
int Id = Convert.ToInt32(Response);
PartnerId = Id > 0 ? Id : 0;
SearchingPartner = false;
tcs.SetResult( PartnerId );
}
};
channel.BasicConsume("SolicitacaoAceitaSameDay", false, eventingBasicConsumer);
return tcs.Task;
}
There are a couple of issues with this approach.
First, no error handling.
Then, what if the event is sent by RMQ before the consumer subscribes to it? The consumer will block, as it will never receive anything back.
And last, I don't think RMQ consumers are ever intended to be created in every request to your controller and then never disposed. While this could work on your dev box where you create a couple of requests manually, it probably won't ever scale to a scenario where dozens or hundreds of concurrent users hit your website and multiple RMQ consumers compete against one another.
I don't think there is an easy way around it other than completely separating the consumer out of your web app, putting it in a system service or a Hangfire job, and letting it collect responses to all possible requests so that web requests can be served from that cache.
This is pure speculation, though, based on my understanding of what you are trying to do. I could be wrong here, of course.
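To avoid waiting indefinitely for the Received event (the concern mentioned above), the task returned by the TaskCompletionSource could be combined with a timeout. A rough sketch, assuming the RequestAccepted method from the answer above and a hypothetical nullable return to signal the timeout:
public async Task<int?> RequestAcceptedWithTimeout(TimeSpan timeout)
{
    Task<int> responseTask = RequestAccepted();
    // Whichever finishes first wins: the consumer's response or the delay.
    Task finished = await Task.WhenAny(responseTask, Task.Delay(timeout));
    if (finished == responseTask)
    {
        return await responseTask;
    }
    return null; // timed out; the caller decides whether to retry or report an error
}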
byte[] messageBytes = Encoding.UTF8.GetBytes(string.Concat(" ", idColeta.ToString()));
I reckon 'idColeta' is blank.

Redis Cache getting timeouts with sync requests and slow responses with async requests, only in an async method

First of all, I am using the Azure Redis Cache service and the StackExchange.Redis (1.0.371) client with my MVC 5 and Web API 2 app. I am getting some very interesting behavior: timeouts with sync requests, and slow responses when I convert my sync calls to async. Let me give you an example. Here is my RedisCacheService:
public class RedisCacheService : ICacheService
{
private readonly IDatabase _cache;
private static readonly ConnectionMultiplexer ConnectionMultiplexer;
static RedisCacheService()
{
var connection = ConfigurationManager.AppSettings["RedisConnection"];
ConnectionMultiplexer = ConnectionMultiplexer.Connect(connection);
}
public RedisCacheService(ISettings settings)
{
_cache = ConnectionMultiplexer.GetDatabase();
}
public bool Exists(string key)
{
return _cache.KeyExists(key);
}
public Task<bool> ExistsAsync(string key)
{
return _cache.KeyExistsAsync(key);
}
public void Save(string key, string value, int timeOutInMinutes)
{
var ts = TimeSpan.FromMinutes(timeOutInMinutes);
_cache.StringSet(key, value, ts);
}
public Task SaveAsync(string key, string value, int timeOutInMinutes)
{
var ts = TimeSpan.FromMinutes(timeOutInMinutes);
return _cache.StringSetAsync(key, value, ts);
}
public string Get(string key)
{
return _cache.StringGet(key);
}
public async Task<string> GetAsync(string key)
{
string result = null;
var val = await _cache.StringGetAsync(key);
if(val.HasValue)
{
result = val;
}
return result;
}
......................................................................
}
And here is my method which invokes the cache:
public async Task<IList<XX>> GetXXXXX(XXXX)
{
var key = string.Format("{0}/XX{1}_{2}", XXXX, XX, XX);
var xxx = _cacheService.Get(key);
if (xxx != null)
{
return JsonConvert.DeserializeObject<IList<XX>>(xxx);
}
var x = await _repository.GetXXXXX(XXXXX);
var contents = JsonConvert.SerializeObject(x);
_cacheService.Save(key, JsonConvert.SerializeObject(x));
return x;
}
The above method always gives me,
System.TimeoutException
Timeout performing GET XXX, inst: 0, mgr: Inactive, queue: 3, qu=2, qs=1, qc=0, wr=1/1, in=0/0
or
System.TimeoutException
Timeout performing SETEX XXX, inst: 0, mgr: Inactive, queue: 2, qu=1, qs=1, qc=0, wr=1/1, in=0/0
Let's change it to async:
public async Task<IList<XX>> GetXXXXX(XXXX)
{
var key = string.Format("{0}/XX{1}_{2}", XXXX, XX, XX);
var xxx = await _cacheService.GetAsync(key);
if (xxx != null)
{
return JsonConvert.DeserializeObject<IList<XX>>(xxx);
}
var x = await _repository.GetXXXXX(XXXXX);
var contents = JsonConvert.SerializeObject(x);
await _cacheService.SaveAsync(key, JsonConvert.SerializeObject(x));
return x;
}
The above method works, but it takes at least 5-10 seconds: 10 seconds if no cache entry is available and 5 seconds if one is.
Now let's make my method completely sync:
public Task<IList<XX>> GetXXXXX(XXXX)
{
var key = string.Format("{0}/XX{1}_{2}", XXXX, XX, XX);
var xxx = _cacheService.Get(key);
if (xxx != null)
{
return Task.FromResult(JsonConvert.DeserializeObject<IList<XX>>(xxx));
}
//var x = await _repository.GetXXXXX(XXXXX);
var x = (IList<XX>)new List<XX>();
var contents = JsonConvert.SerializeObject(x);
_cacheService.Save(key, JsonConvert.SerializeObject(x));
return Task.FromResult(x);
}
Please note the commented-out call to the repository method. The above method works instantly, meaning I get a result in less than 1 second. Clearly something is wrong with Azure or the StackExchange.Redis client.
Update: My last approach (async) is also working like a charm (fast and no errors):
public async Task<IList<XX>> GetXXXXX(XXXX)
{
var key = string.Format("{0}/XX{1}_{2}", XXXX, XX, XX);
var xxx = await _cacheService.GetAsync(key);
if (xxx != null)
{
return JsonConvert.DeserializeObject<IList<XX>>(xxx);
}
//var x = await _repository.GetXXXXX(XXXXX);
var x = (IList<XX>)new List<XX>();
var contents = JsonConvert.SerializeObject(x);
await _cacheService.SaveAsync(key, JsonConvert.SerializeObject(x));
return x;
}
Note that the repository code is still commented out.
Looks like these timeouts in Azure might be an open issue. Have you tried this code against a local (non-Azure) server? Do you get the same results?
I have solved the timeout issue in Redis Cache by setting the syncTimeout property in the Redis cache connection string,
e.g.
var conn = ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=password,syncTimeout=5000");
Reference for connection string properties: https://stackexchange.github.io/StackExchange.Redis/Configuration
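The same setting can also be applied through a ConfigurationOptions object instead of the raw connection string; a small sketch (host and password are placeholders):
var options = ConfigurationOptions.Parse("contoso5.redis.cache.windows.net,ssl=true,password=password");
options.SyncTimeout = 5000; // time in milliseconds to allow for synchronous operations
var conn = ConnectionMultiplexer.Connect(options);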
