I don't use transactions explicitly in my C# .NET Core 3.1 code with EF Core 3, and everything works fine.
Except for my Azure WebJob. It listens to a queue; when there are multiple messages on the queue, and the function therefore gets called multiple times in parallel, I get transaction errors.
My WebJob reads a file from storage and saves the content to a database table.
I also use a sharding mechanism: each client has its own database.
I tried using TransactionScope, but then I get other errors.
The examples I found open the connection and do the saving in a single method wrapped in a TransactionScope. I have those parts split across several methods, which makes it unclear to me how to use the TransactionScope.
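For reference, the single-method pattern those examples follow looks roughly like this (a minimal sketch; connectionString and dboList are placeholders, and the usual EF Core, SqlClient and System.Transactions usings are assumed):
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // the connection is opened inside the scope, so it enlists in the ambient transaction
    using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();

    var options = new DbContextOptionsBuilder<TenantContext>()
        .UseSqlServer(connection)
        .Options;

    await using var context = new TenantContext(options);
    context.Foo.AddRange(dboList);
    await context.SaveChangesAsync();

    // only mark the transaction complete after the save succeeded
    scope.Complete();
}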
Here's some code:
ImportDataService.cs:
//using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
await using var tenantContext = await _tenantFactory.GetContextAsync(clientId, true);
await tenantContext.Foo.AddRangeAsync(dboList, cancellationToken);
await tenantContext.SaveChangesAsync(cancellationToken);
//scope.Complete();
TenantFactory.cs:
public async Task<TenantContext> GetContextAsync(int tenantId, bool lazyLoading = false)
{
_tenantConnection = await _sharding.GetTenantConnectionAsync(tenantId);
var optionsBuilder = new DbContextOptionsBuilder<TenantContext>();
optionsBuilder.UseLoggerFactory(_loggerFactory);
if (lazyLoading) optionsBuilder.UseLazyLoadingProxies();
optionsBuilder.UseSqlServer(_tenantConnection,
options => options.MinBatchSize(5).CommandTimeout(60 * 60));
return new TenantContext(optionsBuilder.Options);
}
This code results in "SqlConnection does not support parallel transactions".
When enabling the TransactionScope I get this error: "This platform does not support distributed transactions".
In my ConfigureServices I have
services.AddSingleton<IImportDataService, ImportDataService>();
services.AddTransient<ITenantFactory, TenantFactory>();
services.AddTransient<IShardingService, ShardingService>();
I also tried AddScoped but no change.
Edit: Additional code
ShardingService.cs
public async Task<SqlConnection> GetTenantConnectionAsync(int tenantId)
{
SqlConnection tenantConnection;
try
{
tenantConnection = await _clientShardMap.OpenConnectionForKeyAsync(tenantId, _tenantConnectionString, ConnectionOptions.Validate);
}
catch (Exception e)
{
_logger.LogDebug($"Error getting tenant connection for key {tenantId}. Error: " + e.Message);
throw;
}
if (tenantConnection == null) throw new ApplicationException($"Cannot get tenant connection for key {tenantId}");
return tenantConnection;
}
When the WebJob is triggered it reads a record from a table; the ID of the record is in the queue message. Before processing the data it first sets the status to Processing, and once the data has been processed it sets the status to Processed or Error:
var fileImport = await _masterContext.FileImports.FindAsync(fileId);
fileImport.Status = Status.Processing;
await _masterContext.SaveChangesAsync();
if (await _fileImportService.ProcessImportFile(fileImport))
fileImport.Status = Status.Processed;
await _masterContext.SaveChangesAsync();
Related
I have a WebJob in my Azure web app that writes data to an Azure Cosmos instance. This WebJob is triggered from a storage queue. Each trigger spawns a new process to do one insert or one update to the Cosmos instance. With the amount of data coming into that queue, the WebJob inserts/updates the Azure Cosmos instance around 1000 times every minute.
In a separate, user-facing portal, the users query data from this Azure Cosmos instance. We have been getting a high number of these errors from that public-facing portal:
Only one usage of each socket address (protocol/network address/port) is normally permitted
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
To me, this is indicative of SNAT port exhaustion. All the documentation and help on this subject, and on these specific error messages, points to "ensuring that we are re-using connections to the Cosmos instance" and to following best practices. I "believe" we are re-using connections to the Azure Cosmos instance properly, but I am not sure. This is the code:
Program.cs
using Microsoft.Extensions.Hosting;
internal class Program
{
private static async Task Main(string[] args)
{
var builder = new HostBuilder();
builder.ConfigureWebJobs(b =>
{
b.AddAzureStorageQueues();
});
var host = builder.Build();
using (host)
{
await host.RunAsync();
}
}
}
Functions.cs
namespace WebhookMessageProcessor
{
public class RingCentralMessageProcessor
{
private static List<KeyValuePair<string, CosmosClient>> cosmosClients = new List<KeyValuePair<string, CosmosClient>>();
public static async Task ProcessQueueMessage([QueueTrigger("<<storage-queue-name>>")] string message, ILogger logger)
{
var model = Newtonsoft.Json.JsonConvert.DeserializeObject<WebHookHandlerModel>(message);
//the intention here is to maintain a list of cosmos clients, as each message from the queue indicates which Cosmos instance to update/insert the data to. For now, however, all messages are going to a single instance. More will be added later.
if (cosmosClients == null) cosmosClients = new List<KeyValuePair<string, CosmosClient>>();
await HandleCallData(model.ownerId, model.body, storageConnectionString);
}
public async static Task HandleCallData(string ownerId, string deserializedData, string storageConnectionString)
{
var model = Newtonsoft.Json.JsonConvert.DeserializeObject<PushModel>(deserializedData);
if (model == null || model.body == null || model.body.sessionId == null)
{
//log error
}
else
{
//the intention here is to maintain a list of cosmos clients, as each message from the queue indicates which Cosmos instance to update/insert the data to. For now, however, all messages are going to a single instance. More will be added later.
CosmosClient cosmosClient = null;
if (!cosmosClients.Any(x => x.Key == ownerId))
{
cosmosClient = new CosmosClient(cosmosConfig.accountEndpoint, cosmosConfig.accountKey);
cosmosClients.Add(new KeyValuePair<string, CosmosClient>(ownerId, cosmosClient));
}
else
{
cosmosClient = cosmosClients.First(x => x.Key == ownerId).Value;
}
//retry loop; success, curTries and maxTries are declared in code omitted from this snippet
do
{
try
{
//data building logic here
//...
var cosmosContainer = cosmosClient.GetContainer(cosmosConfig.databaseId, cosmosConfig.containerId);
string etag = null;
if (condition1) // THEN INSERT
{
var task = await cosmosContainer.CreateItemAsync(call, partitionKey: new PartitionKey(partitionKey), requestOptions: new ItemRequestOptions() { IfMatchEtag = etag });
success = true;
}
else if (condition2) // THEN FIND AND REPLACE
{
var response = await cosmosContainer.ReadItemAsync<CallIndex>(call.id, new PartitionKey(partitionKey));
var existingCallIndex = response.Resource;
etag = response.ETag;
await cosmosContainer.ReplaceItemAsync(existingCallIndex, call.id, new PartitionKey(partitionKey), new ItemRequestOptions() { IfMatchEtag = etag });
success = true;
}
else // FIND AND REPLACE BY DEFAULT
{
var response = await cosmosContainer.ReadItemAsync<CallIndex>(call.id, new PartitionKey(partitionKey));
var existingCallIndex = response.Resource;
etag = response.ETag;
await cosmosContainer.ReplaceItemAsync(existingCallIndex, call.id, new PartitionKey(partitionKey), new ItemRequestOptions() { IfMatchEtag = etag });
success = true;
}
}
catch (Exception ex)
{
//handle exception here
}
curTries++;
} while (!success && curTries < maxTries);
}
}
}
}
I am maintaining a list of cosmos clients in a static variable, as the content of the message may indicate writing to a different cosmos instance. However, as of now, there is only one instance, and all data is going to that single instance. There will be more instances in the future. Is this a good/correct way to reuse connections to the Cosmos instance in my web job?
Thanks
This can technically be achieved, but there are trade-offs you need to make, mainly around latency (you can't have an unbounded list of Cosmos clients in Direct mode).
The key of the dictionary should be the account name; that way you don't end up creating multiple clients for the same account even if the "owner" is different. There should be a singleton client per account your application interacts with.
You should put your clients in Gateway mode. This uses fewer ports at the cost of potentially higher latency, but there is no scenario where an unbounded number of client instances in Direct mode works; that will almost always hit your connection limit. See the example of how to change the connection mode.
You are using a List, which is neither thread-safe nor handles eviction. You should dispose clients that have not been used for some time, or define a maximum number of clients you can handle; it's impossible to write an app that handles an unbounded/infinite number of clients. Maybe MemoryCache is a good option, but you need to define a limit or make sure you can distribute across multiple machines/instances.
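A minimal sketch of that idea, assuming a bounded set of accounts, keyed by account endpoint and using Gateway mode (class and method names are illustrative):
using System.Collections.Concurrent;
using Microsoft.Azure.Cosmos;

public static class CosmosClientCache
{
    // one client per Cosmos account, created lazily and reused for the lifetime of the process
    private static readonly ConcurrentDictionary<string, CosmosClient> Clients =
        new ConcurrentDictionary<string, CosmosClient>();

    public static CosmosClient GetClient(string accountEndpoint, string accountKey)
    {
        return Clients.GetOrAdd(accountEndpoint, endpoint =>
            new CosmosClient(endpoint, accountKey, new CosmosClientOptions
            {
                // Gateway mode keeps the number of ports used per client low
                ConnectionMode = ConnectionMode.Gateway
            }));
    }
}
In the handler, the List<KeyValuePair<string, CosmosClient>> lookup would then be replaced by a single GetClient call per message, so two owners sharing an account also share a client.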
Putting Cosmos clients in a List will never work, as you can't pool connections for different clients pointing at different accounts. Your single client instance here is likely hitting the 128-port max for your WebJob. For Cosmos you should use a single client per instance, and you should also cache the container references. Not doing this will cause 429s on the master partition (which stores all your account metadata) in Cosmos DB, due to all the metadata requests that happen at larger request volumes.
Take a look at this article on singleton clients, container reference caching and PortReuseMode:
Best Practices for .NET SDK
Also see here for Networking Performance Tips for .NET SDK v3
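A rough sketch of the singleton client with a cached container reference described above (the endpoint, key, database and container ids are placeholders):
using Microsoft.Azure.Cosmos;

public static class CosmosResources
{
    // a single client and a single container reference for the whole process;
    // both are thread-safe and meant to be created once and reused
    public static readonly CosmosClient Client =
        new CosmosClient("<account-endpoint>", "<account-key>",
            new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });

    public static readonly Container Container =
        Client.GetContainer("<database-id>", "<container-id>");
}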
I am implementing a set of health checks in a .NET Core 3.1 application with the AspNetCore.HealthCheck NuGet package. Some of the health checks have to reach an EF Core database to check whether data updated by other systems is present, in order to validate that other processes have run properly.
When implementing a single health check for this, everything works great, but as soon as I implement a second health check that does more or less the same thing, with a few variants, I get a threading issue because the first call to EF Core has not completed before the next one arrives.
The EF Core code from the repository
public async Task<IEnumerable<EstateModel>> ListEstates(string customerId)
{
try
{
var estates = _productDbContext.Estates.AsNoTracking().Where(p => p.CustomerId == customerId)
.Include(e => e.Meters)
.ThenInclude(m => m.Counters)
.Include(e => e.Installations);
var entities = await estates.ToListAsync().ConfigureAwait(false);
return _mapper.Map<List<EstateModel>>(entities);
}
catch (Exception ex)
{
Log.Error($"Error listing estate by customer: {customerId}", ex);
}
return null;
}
An example of the health check
public async Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = new CancellationToken())
{
var configs = new List<ConsumptionHealthCheckConfig>();
_configuration.GetSection("HealthCheckSettings:GetConsumptionGas").Bind(configs);
foreach (var config in configs)
{
try
{
return await _healthService.CheckConsumptionHealth(config, false, false);
}
catch(Exception ex)
{
return new HealthCheckResult(HealthStatus.Unhealthy, $"An error occurred while getting consumption for {config.Detailed.InstallationNumber} {ex}", ex);
}
}
return new HealthCheckResult(HealthStatus.Healthy);
}
The healthservice method
public async Task<HealthCheckResult> CheckConsumptionHealth(ConsumptionHealthCheckConfig config, bool isWater, bool isHeating)
{
if ((config.Detailed?.InstallationNumber ?? 0) != 0 && (config.Detailed?.MeterNumber ?? 0) != 0)
{
var estates = await _estateService.GetEstates(config.Detailed.CustomerNo);
Rest is omitted...
The AddHealthChecks in Configure services
internal static void Configure(IConfiguration configuration, IServiceCollection services)
{
services.AddHealthChecks()
//Consumption
.AddCheck<GetConsumptionElectricityHealthCheck>("Consumption Electricity", failureStatus: HealthStatus.Unhealthy, tags: new[] {"Consumption"})
.AddCheck<GetConsumptionWaterHealthCheck>("Consumption Water", failureStatus: HealthStatus.Unhealthy, tags: new[] {"Consumption"})
The exception that I'm getting is
A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
and when looking at the link provided, it states that I should always await any calls to the database immediately, which we clearly do.
I have tried moving the GetEstates part into the health check itself instead of my service, but then I run into an issue where I'm trying to reach the database while it is still being configured.
So my problem arises when these consumption health checks all hit EF Core at the same time, and I cannot see how to prevent that: there is no apparent option to tell the health checks to run in sequence, short of a butt-ugly Thread.Sleep, and as far as I know it shouldn't be necessary to implement thread locking on top of EF Core. Or am I incorrect?
Any help will be greatly appreciated!
As discussed in this issue, all health checks use the same service scope and run in parallel. I'd recommend that you create a new service scope inside any health check that accesses your DbContext.
public virtual async Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default(CancellationToken))
{
using var scope = serviceProvider.CreateScope();
var healthService = scope.ServiceProvider.GetRequiredService<...>();
...
}
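Spelled out a little more, assuming the IServiceProvider is constructor-injected and IHealthService is whatever service wraps CheckConsumptionHealth (both names are illustrative):
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class GetConsumptionGasHealthCheck : IHealthCheck
{
    private readonly IServiceProvider _serviceProvider;

    public GetConsumptionGasHealthCheck(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        // a scope per check means each check resolves its own DbContext
        // instead of sharing one scoped instance with the other checks
        using var scope = _serviceProvider.CreateScope();
        var healthService = scope.ServiceProvider.GetRequiredService<IHealthService>();

        // ...run the actual consumption check via healthService here...
        return HealthCheckResult.Healthy();
    }
}
The checks are registered with AddCheck exactly as before; only the scope handling inside each check changes.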
I'm just starting out with async and Tasks, and my code has stopped processing. It happens when I have an incoming network packet and I try to communicate with the database inside the packet handler.
public class ClientConnectedPacket : IClientPacket
{
private readonly EntityFactory _entityFactory;
public ClientConnectedPacket(EntityFactory entityFactory)
{
_entityFactory= entityFactory;
}
public async Task Handle(NetworkClient client, ClientPacketReader reader)
{
client.Entity = await _entityFactory.CreateInstanceAsync( reader.GetValueByKey("unique_device_id"));
// this Console.WriteLine never gets reached
Console.WriteLine($"Client [{reader.GetValueByKey("unique_device_id")}] has connected");
}
}
The Handle method gets called from an async task
if (_packetRepository.TryGetPacketByName(packetName, out var packet))
{
await packet.Handle(this, new ClientPacketReader(packetName, packetData));
}
else
{
Console.WriteLine("Unknown packet: " + packetName);
}
Here is the method which I think is causing the issue
public async Task<Entity> CreateInstanceAsync(string uniqueId)
{
await using (var dbConnection = _databaseProvider.GetConnection())
{
dbConnection.SetQuery("SELECT COUNT(NULL) FROM `entities` WHERE `unique_id` = #uniqueId");
dbConnection.AddParameter("uniqueId", uniqueId);
var row = await dbConnection.ExecuteRowAsync();
if (row != null)
{
return new Entity(uniqueId, false);
}
}
return new Entity(uniqueId,true);
}
DatabaseProvider's GetConnection method:
public DatabaseConnection GetConnection()
{
var connection = new MySqlConnection(_connectionString);
var command = connection.CreateCommand();
return new DatabaseConnection(_logFactory.GetLogger(), connection, command);
}
DatabaseConnection's constructor:
public DatabaseConnection(ILogger logger, MySqlConnection connection, MySqlCommand command)
{
_logger = logger;
_connection = connection;
_command = command;
_connection.Open();
}
When I comment out this line, it reaches the Console.WriteLine
_connection.Open();
I ran a POC project spinning up 100 parallel tasks, both with MySql.Data 8.0.19 and MySqlConnector 0.63.2, in a .NET Core 3.1 console application. I create, open and dispose the connection within the context of every single task. Both providers run to completion without errors.
The difference is that MySql.Data queries run synchronously even though the library provides async method signatures, e.g. ExecuteReaderAsync() or ExecuteScalarAsync(), while MySqlConnector runs truly asynchronously.
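A sketch of that kind of test (the connection string is a placeholder; MySqlConnector is assumed, though the same shape compiles against MySql.Data):
using System.Linq;
using System.Threading.Tasks;
using MySqlConnector;

public static class ConnectionPoc
{
    private const string ConnectionString = "<connection-string>"; // placeholder

    public static async Task RunAsync()
    {
        var tasks = Enumerable.Range(0, 100).Select(async _ =>
        {
            // every task creates, opens and disposes its own connection;
            // nothing is shared between tasks
            using var connection = new MySqlConnection(ConnectionString);
            await connection.OpenAsync();

            using var command = new MySqlCommand("SELECT 1", connection);
            await command.ExecuteScalarAsync();
        });

        await Task.WhenAll(tasks);
    }
}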
You may be running into:
a deadlock situation not specifically related to the MySQL provider
not properly handling exceptions inside your tasks (you may inspect the task's associated aggregate exception and also monitor the MySQL server logs)
your execution still being blocked (not yet returning a result) when you assume it's not working, since with a high number of parallel tasks MySql.Data executes synchronously
Multi-threading with MySQL must use independent connections. Given that, multithreading is not a MySQL question but an issue for the client language, C# in your question.
That is, build your threads without regard to MySQL, then create a connection in each thread that needs to do queries. It will be on your shoulders if you need to pass data between the threads.
I usually find that optimizing queries eliminates the temptation to multi-thread my applications.
I am faced with a peculiar async problem which I can reproduce easily but cannot understand.
My Current Setup
I have a WCF service which exposes two APIs: API1 and API2. Both service contracts are synchronous. API1 looks up an in-memory dictionary, then uses Task.Factory.StartNew to start a task which fetches data from a SQL server, compares it with the data from the dictionary and writes some logs. In case the SQL Server has connectivity issues, this task retries SqlConnection.OpenAsync up to 3 more times. Note that the API call itself returns as soon as it has the data from the dictionary (it does not wait for the SQL operation to complete).
API2 is much simpler: it just calls a stored procedure on the SQL server, gets the data and returns.
The code to open connection is as follows:
public static int OpenSqlConn(SqlConnection connection)
{
return OpenSqlConnAsync(connection).Result;
}
public async static Task<int> OpenSqlConnAsync(SqlConnection connection)
{
return await OpenConnAsync(connection);
}
private static async Task<int> OpenConnAsync(SqlConnection connection)
{
int retryCounter = 0;
TimeSpan? waitTime = null;
while (true)
{
if (waitTime.HasValue)
{
await Task.Delay(waitTime.Value).ConfigureAwait(false);
}
try
{
var startTime = DateTime.UtcNow;
await connection.OpenAsync().ConfigureAwait(false);
break;
}
catch (Exception e)
{
if (retryCounter >= 3)
{
SafeCloseConnection(connection);
return retryCounter;
}
retryCounter++;
waitTime = TimeSpan.FromSeconds(6);
}
}
return retryCounter;
}
The API1 code looks like below:
public API1Response API1(API1Request request)
{
// look up in memory dictionary for the request
API1Response response = getDataFromDictionary(request);
// create a task to get some data from DB
Action action = () =>
{
GetDataFromDb(request);
};
Task.Factory.StartNew(action).ConfigureAwait(false);
// this returns immediately even if the DB is not available and the above task is still retrying
return response;
}
public void GetDataFromDb(API1Request request)
{
using (var connection = new SqlConnection(...))
{
OpenSqlConn(connection);
/// hangs for long even if db is available
ReadDataFromDb(connection);
}
}
public API2Response API2(API2Request request)
{
return GetDataFromDbForAPI2(request);
}
public API2Response GetDataFromDbForAPI2(API2Request request)
{
using (var connection = new SqlConnection(...))
{
OpenSqlConn(connection); /// hangs for long even if db is available
ReadDataFromDb(connection);
}
}
The Problem
The service runs into the following problem when the SQL Server is unavailable even for short periods of time, and some client makes just 100 calls to API1:
When my SQL server has connectivity issues and I get around 100 calls to API1, even though API1 returns to the caller, it has created 100 tasks that will each try to open a connection to the bad DB. Each of those tasks hangs in a retry loop for some time (which is expected). In my experiments, I can simulate DB unavailability by using a bad connection string for API1.
Now let's say the DB is back up again and a call to API2 is made to the service. What I find is that when the API2 call reaches the OpenAsync portion above, it hangs. Just hangs :(
Some observations
1. When I look at 'Parallel Stacks' in Visual Studio, I find that there are 100 threads with the API1 stack, all blocked in the following stack:
ManualResetEventSlim.Wait()
Task.SpinThenBlockingWait()
Task.InternalWait()
Task<TResult>.GetResultCore()
OpenConn()
There is 1 thread with the API2 stack, which is blocked in a similar stack.
However, if I replace SqlConnection.OpenAsync with SqlConnection.Open(), the API2 call returns immediately.
Need Help
What I would like to understand is why API2, which should be able to open a DB connection (the DB is available at that time), also hangs on OpenAsync. Is there an obvious synchronization issue that I am missing? And why does the behavior change when I change SqlConnection.OpenAsync() to SqlConnection.Open()?
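For comparison, a fully asynchronous variant of GetDataFromDb that awaits the open instead of blocking on .Result would look roughly like this (a sketch; ReadDataFromDbAsync is a hypothetical async counterpart of ReadDataFromDb):
public async Task GetDataFromDbAsync(API1Request request)
{
    using (var connection = new SqlConnection(...))
    {
        // awaiting frees the thread-pool thread while the connection opens,
        // instead of parking it inside OpenSqlConn(connection) on .Result
        await OpenSqlConnAsync(connection).ConfigureAwait(false);
        await ReadDataFromDbAsync(connection).ConfigureAwait(false);
    }
}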
I am using this tutorial in order to connect a Xamarin.Forms app with Easy Tables. I cannot add data to the database in Azure as I get
System.InvalidOperationException
The error message is the following
An insert operation on the item is already in the queue.
The exception happens in the following line of code.
await usersTable.InsertAsync(data);
In order to add a user
var user = new User { Username = "username", Password = "password" };
bool x = await AddUser(user);
AddUser
public async Task<bool> AddUser(User user)
{
try
{
await usersTable.InsertAsync(user);
await SyncUsers();
return true;
}
catch (Exception x)
{
await new MessageDialog(x.Message.ToString()).ShowAsync();
return false;
}
}
SyncUsers()
public async Task SyncUsers()
{
await usersTable.PullAsync("users", usersTable.CreateQuery());
await client.SyncContext.PushAsync();
}
where
IMobileServiceSyncTable<User> usersTable;
MobileServiceClient client = new MobileServiceClient("url");
Initialize
var path = Path.Combine(MobileServiceClient.DefaultDatabasePath, "DBNAME.db");
var store = new MobileServiceSQLiteStore(path);
store.DefineTable<User>();
await client.SyncContext.InitializeAsync(store, new MobileServiceSyncHandler());
usersTable = client.GetSyncTable<User>();
Please check your table; you have probably already added the item. Also, I would suggest that you don't set the Id property on your entity, because you might be inserting an ID that already exists in your table. That's probably why the exception appears.
Hope it helps!
Some debugging you can do:
1) Turn on diagnostic logging in the backend and debug the backend: https://adrianhall.github.io/develop-mobile-apps-with-csharp-and-azure/chapter8/developing/#debugging-your-cloud-mobile-backend
2) Add a logging delegating handler in your MobileServiceClient setup: https://adrianhall.github.io/develop-mobile-apps-with-csharp-and-azure/chapter3/server/#turning-on-diagnostic-logs
The MobileServicePushFailedException contains an inner exception that contains the actual error. Normally, it is one of the 409/412 HTTP errors, which indicates a conflict. However, it can also be a 404 (which means there is a mismatch between what your client is asking for and the table name in Easy Tables) or 500 (which means the server crashed, in which case the server-side diagnostic logs indicate why).
Easy Tables is just a Node.js service underneath the covers.
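A minimal sketch of the logging delegating handler mentioned in step 2 (the handler name is illustrative; the MobileServiceClient constructor accepts additional HttpMessageHandlers):
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class LoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // log every request/response the sync context sends to the backend
        Debug.WriteLine($"Request: {request.Method} {request.RequestUri}");
        var response = await base.SendAsync(request, cancellationToken);
        Debug.WriteLine($"Response: {(int)response.StatusCode} {response.ReasonPhrase}");
        return response;
    }
}

// usage:
// var client = new MobileServiceClient("url", new LoggingHandler());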