I am currently trying to write an ASP.NET Core API middleware which opens a SQL transaction before the underlying MVC action is executed. The transaction uses the Serializable isolation level, and is used by all SQL requests in the underlying MVC action. Then, when the MVC action exits:
if it succeeded, the middleware should commit the transaction;
if it failed with a serialization error, the middleware should reset everything and retry the MVC action (max. N times);
otherwise, the middleware should roll back the transaction and rethrow the error.
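For reference, the PostgresException handling below implies Npgsql, so the driver's Begin step presumably looks something like this sketch (the IDatabaseDriver internals aren't shown in the question, and connectionString is an assumption):
using Npgsql;
using System.Data;

// Open a connection and start a serializable transaction on it.
using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();
using var transaction = connection.BeginTransaction(IsolationLevel.Serializable);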
What I ended up with is:
public async Task InvokeAsync(HttpContext context, IDatabaseDriver databaseDriver)
{
context.Request.EnableBuffering(REQUEST_BUFFER_THRESHOLD);
int attempt = 0;
while (true)
{
attempt++;
try
{
await this._next(context);
await databaseDriver.Commit();
break;
}
catch (PostgresException ex)
when (ex.SqlState == PostgresErrorCodes.SerializationFailure &&
attempt <= MAX_RETRIES)
{
// SQL serialization failure: rollback and retry
await databaseDriver.Rollback();
context.Request.Body.Seek(0, SeekOrigin.Begin);
}
catch
{
// Unhandled error: rollback and throw
await databaseDriver.Rollback();
throw;
}
}
}
Unfortunately, this doesn't work properly because SQL serialization exceptions sometimes happen at the await databaseDriver.Commit() step, which executes after the action has returned successfully and started writing to the HTTP response stream. Retrying then results in duplicate JSON data in the response body.
What would be the best approach to solve this problem?
Let the API client re-execute the query (using a dedicated error code like HTTP 419) and never re-execute the ASP.NET action from a middleware. Request buffering is a bad thing anyway, and there might be other undesirable side effects when rerunning the MVC pipeline.
Commit the request transaction in each MVC action before it returns instead of doing so from the outer middleware.
Commit the transaction in a global action filter (only if no exception is thrown), which is run before the response stream is touched, thus avoiding the duplicate "commit" instruction in each action from the previous approach.
Somehow delay the ASP.NET MVC pipeline from writing to the response stream until the transaction is committed (is that even possible?).
Anything else.
I ended up solving this issue by resetting the response stream before each retry. This is normally not possible because the response stream is not seekable, but you can use a temporary MemoryStream to replace the response stream while the middleware is running:
public async Task InvokeAsync(HttpContext context, IDatabaseDriver databaseDriver)
{
context.Request.EnableBuffering(REQUEST_BUFFER_THRESHOLD);
Stream originalResponseBodyStream = context.Response.Body;
using var buffer = new MemoryStream();
context.Response.Body = buffer;
try
{
int attempt = 0;
while (true)
{
attempt++;
try
{
// Process request then commit transaction
await this._next(context);
await databaseDriver.Commit();
break;
}
catch (PostgresException ex)
when (ex.SqlState == PostgresErrorCodes.SerializationFailure &&
attempt <= MAX_RETRIES)
{
// SQL serialization failure: rollback and retry
await databaseDriver.Rollback();
context.Request.Body.Seek(0, SeekOrigin.Begin);
context.Response.Body.SetLength(0);
}
catch
{
// Unhandled error: rollback and throw
await databaseDriver.Rollback();
throw;
}
}
}
finally
{
context.Response.Body = originalResponseBodyStream;
buffer.Seek(0, SeekOrigin.Begin);
await buffer.CopyToAsync(context.Response.Body);
}
}
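For completeness, the middleware is wired up like any other (the class name here is hypothetical):
// In Startup.Configure, registered before MVC so it wraps every action:
app.UseMiddleware<SerializableTransactionMiddleware>();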
Related
With no luck, I tried configuring my ServiceBusClient to retry a message with a fixed delay of 10 seconds. I also tried the exponential retry configuration. However, the code always retries the message within a second or two, completely ignoring the configuration. It even ignores MaxRetries and only retries 10 times, the value configured for the queue in the Azure Portal. What am I doing wrong?
I am using the Azure.Messaging.ServiceBus library, NuGet package 7.0.0.
The code:
ServiceBusClient client = new ServiceBusClient(serviceBusConnectionString, new ServiceBusClientOptions()
{
RetryOptions = new ServiceBusRetryOptions()
{
Mode = ServiceBusRetryMode.Fixed,
Delay = TimeSpan.FromSeconds(10),
MaxDelay = TimeSpan.FromMinutes(3),
MaxRetries = 30
}
});
ServiceBusProcessor processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
// throwing an exception in MyMessageHandlerAsync on purpose
// to test out the retries configuration
processor.ProcessMessageAsync += MyMessageHandlerAsync;
// The uncaught exception causes this method to execute.
// Processing is attempted 10 times with
// virtually no delay between each attempt.
// After the 10th attempt, the message goes to deadletter,
// which is expected.
processor.ProcessErrorAsync += MyErrorHandler;
I'm adding more to this question after receiving the first response:
Currently, MyMessageHandlerAsync is:
private async Task MyMessageHandlerAsync(EventArgs eventArgs)
{
var args = (ProcessMessageEventArgs)eventArgs;
var body = args.Message.Body.ToString();
// ...
// process body
// ...
await args.CompleteMessageAsync(args.Message);
}
How should I change the method's contents to retry on a non-transient ServiceBusException? Please fill in the code where the TODOs are below:
private async Task MyMessageHandlerAsync(EventArgs eventArgs)
{
var args = (ProcessMessageEventArgs)eventArgs;
try
{
var body = args.Message.Body.ToString();
// ...
// process body
// ...
await args.CompleteMessageAsync(args.Message);
}
catch (ServiceBusException sbe)
{
if (sbe.IsTransient)
{
// TODO: Is it correct that the exponential retry will work
// here? The one defined in the ServiceBusClient.
// So, no code is needed here, just throw.
throw;
}
else
{
// TODO: for non-transient, this is where the
// options in the ServiceBusClient don't apply.
// Is that correct? How do I do an
// exponential retry here?
}
}
catch (Exception e)
{
// TODO: same problem as else in first catch.
}
}
ServiceBusRetryOptions is intended to be used by the ASB client when there are transient errors that are not bubbled up to your code right away; it is an internal retry mechanism built into the client that performs retries on your behalf before the exception is raised.
Use the retry policy to tell the ASB client how to deal with transient errors before giving up, not to control how many times a message handler is allowed to throw.
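For handler-level retries you have to roll your own logic. One common pattern (a sketch, not from the original answer): complete the failing message and schedule a copy for later, tracking the attempt count in an application property. Here _sender (a ServiceBusSender for the same queue) and MaxHandlerRetries are assumptions:
private async Task MyMessageHandlerAsync(ProcessMessageEventArgs args)
{
    try
    {
        var body = args.Message.Body.ToString();
        // ... process body ...
        await args.CompleteMessageAsync(args.Message);
    }
    catch (Exception)
    {
        // How many times has this message already been rescheduled?
        int attempt = args.Message.ApplicationProperties.TryGetValue("retry-count", out var value)
            ? (int)value
            : 0;
        if (attempt >= MaxHandlerRetries)
        {
            await args.DeadLetterMessageAsync(args.Message, "retries-exhausted");
            return;
        }
        // Exponential backoff: 10s, 20s, 40s, ...
        var delay = TimeSpan.FromSeconds(10 * Math.Pow(2, attempt));
        var clone = new ServiceBusMessage(args.Message);
        clone.ApplicationProperties["retry-count"] = attempt + 1;
        await _sender.ScheduleMessageAsync(clone, DateTimeOffset.UtcNow.Add(delay));
        await args.CompleteMessageAsync(args.Message);
    }
}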
I don't explicitly use transactions in my C# .NET Core 3.1 / EF Core 3 code, and everything works fine.
Except for my Azure WebJob. It listens to a queue; when multiple messages are on the queue, and the function therefore gets called multiple times in parallel, I get transaction errors.
My webjob reads a file from the storage and saves the content to a database table.
I also use the Sharding mechanism: each client has its own database.
I tried using TransactionScope but then I get other errors.
The examples I found use TransactionScope with opening the connection and doing the saving all in one method; I have those parts split across several methods, which makes it unclear to me how to use TransactionScope.
Here's some code:
ImportDataService.cs:
//using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
await using var tenantContext = await _tenantFactory.GetContextAsync(clientId, true);
await tenantContext.Foo.AddRangeAsync(dboList, cancellationToken);
await tenantContext.SaveChangesAsync(cancellationToken);
//scope.Complete();
TenantFactory.cs:
public async Task<TenantContext> GetContextAsync(int tenantId, bool lazyLoading = false)
{
_tenantConnection = await _sharding.GetTenantConnectionAsync(tenantId);
var optionsBuilder = new DbContextOptionsBuilder<TenantContext>();
optionsBuilder.UseLoggerFactory(_loggerFactory);
if (lazyLoading) optionsBuilder.UseLazyLoadingProxies();
optionsBuilder.UseSqlServer(_tenantConnection,
options => options.MinBatchSize(5).CommandTimeout(60 * 60));
return new TenantContext(optionsBuilder.Options);
}
This code results in "SqlConnection does not support parallel transactions."
When enabling TransactionScope I get this error: "This platform does not support distributed transactions."
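For what it's worth, EF Core also has its own transaction API that stays on the single tenant connection and doesn't escalate to a distributed transaction; a minimal sketch (my assumption, untested against this sharding setup):
await using var tenantContext = await _tenantFactory.GetContextAsync(clientId, true);
// Wrap the changes in one SQL transaction on this single connection:
await using var transaction = await tenantContext.Database.BeginTransactionAsync(cancellationToken);
await tenantContext.Foo.AddRangeAsync(dboList, cancellationToken);
await tenantContext.SaveChangesAsync(cancellationToken);
await transaction.CommitAsync(cancellationToken);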
In my ConfigureServices I have
services.AddSingleton<IImportDataService, ImportDataService>();
services.AddTransient<ITenantFactory, TenantFactory>();
services.AddTransient<IShardingService, ShardingService>();
I also tried AddScoped but no change.
Edit: Additional code
ShardingService.cs
public async Task<SqlConnection> GetTenantConnectionAsync(int tenantId)
{
SqlConnection tenantConnection;
try
{
tenantConnection = await _clientShardMap.OpenConnectionForKeyAsync(tenantId, _tenantConnectionString, ConnectionOptions.Validate);
}
catch (Exception e)
{
_logger.LogDebug($"Error getting tenant connection for key {tenantId}. Error: " + e.Message);
throw;
}
if (tenantConnection == null) throw new ApplicationException($"Cannot get tenant connection for key {tenantId}");
return tenantConnection;
}
When the WebJob gets triggered, it reads a record from a table; the ID of the record is in the queue message. Before processing the data it first sets the status to Processing, and when the data has been processed it sets the status to Processed or Error:
var fileImport = await _masterContext.FileImports.FindAsync(fileId);
fileImport.Status = Status.Processing;
await _masterContext.SaveChangesAsync();
if (await _fileImportService.ProcessImportFile(fileImport))
fileImport.Status = Status.Processed;
await _masterContext.SaveChangesAsync();
I made a simple app where chunks of a file are streamed from client to server. Server-side I have handled exceptions such that a response of my own is returned to the client. When I throw an exception before reading from the stream has been completed, however, even though it gets caught and a custom response is returned, client-side I still get an unhandled RpcException with status Cancelled.
public override async Task<UploadFileResponse> UploadFile(
IAsyncStreamReader<UploadFileRequest> requestStream,
ServerCallContext context)
{
try
{
bool moveNext = await requestStream.MoveNext();
using (var stream = System.IO.File.Create($"foo.txt"))
{
while (moveNext)
{
// If something goes wrong here, before the stream has been fully read, an RpcException
// of status Cancelled is caught in the client instead of receiving an UploadFileResponse of
// type 'Failed'. Despite the fact that we catch it in the server and return a Failed response.
await stream.WriteAsync(requestStream.Current.Data.ToByteArray());
moveNext = await requestStream.MoveNext();
throw new Exception();
}
// If something goes wrong here, when the stream has been fully read, we catch it and successfully return
// a response of our own instead of an RpcException.
// throw new Exception();
}
return new UploadFileResponse()
{
StatusCode = UploadStatusCode.Ok
};
}
catch (Exception ex)
{
return new UploadFileResponse()
{
Message = ex.Message,
StatusCode = UploadStatusCode.Failed
};
}
}
Perhaps the way I approach implementing this operation is wrong. I can see why the server would return a Cancelled RpcException, because we indeed end the call before the stream has been fully read, but I don't understand why it overrides the custom response. It might be that both would have to be handled client-side: a failed response and a potential RpcException.
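That client-side handling might look roughly like this (a sketch; the generated client, the chunks variable, and the field names are assumptions based on the snippets above):
using var call = client.UploadFile();
try
{
    foreach (var chunk in chunks)
        await call.RequestStream.WriteAsync(new UploadFileRequest { Data = chunk });
    await call.RequestStream.CompleteAsync();
    // The call can still fail even after all chunks have been written.
    UploadFileResponse response = await call;
    if (response.StatusCode == UploadStatusCode.Failed)
        Console.WriteLine(response.Message);
}
catch (RpcException ex) when (ex.StatusCode == StatusCode.Cancelled)
{
    // The server tore the call down before fully reading the request stream.
    Console.WriteLine(ex.Status.Detail);
}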
I found some materials on the topic - Server and Client.
Apparently it is common to throw RpcExceptions whenever there should be an invalid response as also shown in the official gRPC Github repository here.
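Following that convention, the catch block in UploadFile would throw a status instead of returning a custom Failed response (a sketch of just the changed block):
catch (Exception ex)
{
    // Fail the RPC explicitly; the client then observes an RpcException
    // with this status instead of an UploadFileResponse.
    throw new RpcException(new Status(StatusCode.Internal, ex.Message));
}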
I am using a Kestrel-based server application with ASP.NET Core 2.1. I have a custom error-handling middleware like this:
public class ErrorHandlingMiddleware
{
private readonly RequestDelegate next;
public ErrorHandlingMiddleware(RequestDelegate next)
{
this.next = next;
}
public async Task Invoke(HttpContext context /* other dependencies */)
{
try
{
await next(context);
}
catch (Exception ex)
{
await HandleExceptionAsync(context, ex);
}
}
private static Task HandleExceptionAsync(HttpContext context, Exception exception)
{
Log.Warning(exception,"Exception occurred: {exception}",exception);
var code = HttpStatusCode.InternalServerError; // 500 if unexpected
var result = JsonConvert.SerializeObject(new { error = exception.Message });
context.Response.ContentType = "application/json";
context.Response.StatusCode = (int)code;
return context.Response.WriteAsync(result);
}
}
It seems to work in 99% of cases, but every now and then the server process stops, and I see some exception as the last logged entry. Unfortunately, I haven't been able to reproduce this on my development machine; it only appears on the production system. In my understanding this should not happen in any case.
Are there any known errors I could make to make the server stop? Is there anything I could enable for diagnostics?
The stacktraces of the logged exceptions usually indicate some issue with the input or things which I would like to report using the ErrorHandlingMiddleware.
Are you using Windows or Linux? If using Windows, you should be able to capture a crash dump on process crash using WER (Windows Error Reporting): https://michaelscodingspot.com/how-to-create-use-and-debug-net-application-crash-dumps-in-2019/#Automatically-create-dump-on-Crash.
On Linux you can do this https://www.cyberciti.biz/tips/linux-core-dumps.html
That should let you collect a crash dump and you can analyze it to see where the crash is coming from.
Generally we catch all exceptions that happen during requests. Crashing the process usually means one of the following:
an exception thrown from an async void method in your code (see the sketch below)
an exception thrown on a background thread
a StackOverflowException
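To illustrate the first case (a minimal, hypothetical example): nothing observes the Task of an async void method, so an exception escaping it is rethrown on the thread pool and terminates the process, bypassing the try/catch in the middleware entirely.
// DON'T: an exception escaping an async void method cannot be caught by
// request middleware and crashes the process.
private async void FireAndForget()
{
    await Task.Delay(100);
    throw new InvalidOperationException("this crashes the process");
}
// DO: return a Task so a caller can await it and observe the exception.
private async Task FireAndForgetSafeAsync()
{
    await Task.Delay(100);
    throw new InvalidOperationException("this can be caught by the caller");
}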
After trawling the internet for hours, I'm lost on how to solve my problem for ASP.NET Core 2.x.
I am generating a CSV on the fly (which can take several minutes) and then trying to send it back to the client. Lots of clients time out before I start sending a response, so I am trying to stream the file back to them (with an immediate 200 response) and write to the stream asynchronously. This seemed possible with PushStreamContent in classic ASP.NET, but I'm unsure how to structure my code so that the CSV generation happens asynchronously while an HTTP response is returned immediately.
[HttpGet("csv")]
public async Task<FileStreamResult> GetCSV(long id)
{
// this stage can take 2+ mins, which obviously blocks the response
var data = await GetData(id);
var records = _csvGenerator.GenerateRecords(data);
// using the CsvHelper Nuget package
var stream = new MemoryStream();
var writer = new StreamWriter(stream);
var csv = new CsvWriter(writer);
csv.WriteRecords(records);
await writer.FlushAsync();
stream.Position = 0;
return new FileStreamResult(stream, new MediaTypeHeaderValue("text/csv"))
{
FileDownloadName = "results.csv"
};
}
If you make a request to this controller method, you'll get nothing until the whole CSV has finished generating and then you finally get a response, by which point most client requests have timed out.
I've tried wrapping the CSV generation code in a Task.Run() but that has not helped my issue either.
There isn't a PushStreamContent kind of type built into ASP.NET Core. You can, however, build your own FileCallbackResult which does the same thing. This example code should do it:
public class FileCallbackResult : FileResult
{
private Func<Stream, ActionContext, Task> _callback;
public FileCallbackResult(MediaTypeHeaderValue contentType, Func<Stream, ActionContext, Task> callback)
: base(contentType?.ToString())
{
if (callback == null)
throw new ArgumentNullException(nameof(callback));
_callback = callback;
}
public override Task ExecuteResultAsync(ActionContext context)
{
if (context == null)
throw new ArgumentNullException(nameof(context));
var executor = new FileCallbackResultExecutor(context.HttpContext.RequestServices.GetRequiredService<ILoggerFactory>());
return executor.ExecuteAsync(context, this);
}
private sealed class FileCallbackResultExecutor : FileResultExecutorBase
{
public FileCallbackResultExecutor(ILoggerFactory loggerFactory)
: base(CreateLogger<FileCallbackResultExecutor>(loggerFactory))
{
}
public Task ExecuteAsync(ActionContext context, FileCallbackResult result)
{
SetHeadersAndLog(context, result, null);
return result._callback(context.HttpContext.Response.Body, context);
}
}
}
Usage:
[HttpGet("csv")]
public IActionResult GetCSV(long id)
{
return new FileCallbackResult(new MediaTypeHeaderValue("text/csv"), async (outputStream, _) =>
{
var data = await GetData(id);
var records = _csvGenerator.GenerateRecords(data);
var writer = new StreamWriter(outputStream);
var csv = new CsvWriter(writer);
csv.WriteRecords(records);
await writer.FlushAsync();
})
{
FileDownloadName = "results.csv"
};
}
Bear in mind that FileCallbackResult has the same limitations as PushStreamContent: if an error occurs in the callback, the web server has no good way of notifying the client of that error. All you can do is propagate the exception, which will cause ASP.NET to close the connection early, so clients get a "connection unexpectedly closed" or "download aborted" error. This is because HTTP sends the status code first, in the headers, before the body starts streaming.
If document generation takes 2+ minutes, it should be asynchronous. It could work like this:
the client sends a request to generate the document
you accept the request, start the generation in the background, and reply with a message like "generation has started, we will notify you"
the client periodically checks whether the document is ready and finally gets the link
You can also do this with SignalR. The steps are the same, but the client doesn't need to poll for the document's status; you can push the link when the document is completed.
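A minimal sketch of the polling variant (all names hypothetical; a real implementation would use a durable job store or a hosted background service rather than an in-memory dictionary and Task.Run):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("reports")]
public class ReportsController : ControllerBase
{
    // Job status table; in-memory for the sketch only.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost("csv/{id}")]
    public IActionResult StartCsv(long id)
    {
        var jobId = Guid.NewGuid();
        Jobs[jobId] = "running";

        // Fire off generation in the background and return immediately.
        _ = Task.Run(async () =>
        {
            try
            {
                await GenerateCsvAsync(id, jobId); // hypothetical generator
                Jobs[jobId] = "done";
            }
            catch
            {
                Jobs[jobId] = "failed";
            }
        });

        // 202 Accepted plus a URL the client can poll.
        return AcceptedAtAction(nameof(GetStatus), new { jobId }, new { jobId });
    }

    [HttpGet("csv/{jobId}/status")]
    public IActionResult GetStatus(Guid jobId) =>
        Jobs.TryGetValue(jobId, out var status) ? Ok(new { status }) : (IActionResult)NotFound();

    private static Task GenerateCsvAsync(long id, Guid jobId)
        => Task.Delay(TimeSpan.FromSeconds(5)); // stand-in for the real CSV work
}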