Multiple API calls competing for one atomic DB action - C#

I have an API that sends a message and creates a contact: when it is called and no such contact is found in the database, it first creates the contact, then sends out the message.
The problem: when multiple API calls are issued concurrently (sending a message to the same receiver), multiple contacts for the same receiver get created (a race condition).
Say A, B and C are issued concurrently. While A is still creating the contact, B and C have already checked the contact list and found no such contact, so B and C also create it.
Any clue on how to solve such a race condition? I'm using an Azure SQL database, .NET 6, EF Core, etc.
I tried using a Semaphore, but a Semaphore is intra-process, while the concurrent API calls come from multiple processes.
I also tried a named mutex:
_mutex = new Mutex(false, "Global\\MyUniqueMutexName", out bool hasInitialOwnership);
try
{
    if (!hasInitialOwnership)
    {
        _mutex.WaitOne();
    }
    res = await CriticalAction(...params...);
}
catch (Exception e)
{
    Console.WriteLine(e);
    throw;
}
finally
{
    if (hasInitialOwnership)
    {
        _mutex.ReleaseMutex();
    }
}
but that doesn't work either.
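A common approach for this situation (a sketch, not from the original question) is to let the database itself enforce uniqueness with a unique index on the contact key, and treat the duplicate-key error as "the contact already exists". With EF Core against SQL Server that could look roughly like this, where Contact, ReceiverId and db are assumed names:

// A sketch, assuming a Contact entity with a ReceiverId property and a DbContext `db`.
// 1) Make uniqueness a database guarantee (in OnModelCreating):
modelBuilder.Entity<Contact>()
    .HasIndex(c => c.ReceiverId)
    .IsUnique();

// 2) Insert optimistically and treat a duplicate-key error as "contact already exists":
var contact = new Contact { ReceiverId = receiverId };
db.Contacts.Add(contact);
try
{
    await db.SaveChangesAsync();
}
catch (DbUpdateException ex) when (ex.InnerException is SqlException sql
                                   && (sql.Number == 2601 || sql.Number == 2627))
{
    // 2601/2627 are SQL Server's duplicate-key error numbers:
    // another concurrent request won the race, so reuse its row.
    db.Entry(contact).State = EntityState.Detached; // stop tracking the failed insert
    contact = await db.Contacts.SingleAsync(c => c.ReceiverId == receiverId);
}
// Safe to send the message now; exactly one contact row exists.

This works no matter how many processes or machines serve the API, which is why it is usually preferred over in-process (or even named) locks.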

Related

Entity Framework Core - transaction cannot be roll back after commit

It is using Entity Framework Core to update the database.
dbContextTransaction.Commit(); works fine, but after the commit there is a file operation that fails. It throws an error, so the code tries to roll back using dbContextTransaction.Rollback(); which results in:
This SqlTransaction has completed; it is no longer usable.
DbContext dbContext = scope.ServiceProvider.GetService<DbContext>();
IDbContextTransaction dbContextTransaction = dbContext.Database.BeginTransaction();
try
{
    IOrderDao orderDao = scope.ServiceProvider.GetService<IOrderDao>();
    IItemSoldPriceDao itemSoldPriceDao = scope.ServiceProvider.GetService<IItemSoldPriceDao>();
    ItemSoldPrice itemSoldPrice = new ItemSoldPrice
    {
        ...
    };
    itemSoldPriceDao.AddItemSoldPrice(itemSoldPrice);
    order.SoldPriceCaptured = true;
    dbContext.SaveChanges();
    dbContextTransaction.Commit();
    //... some other file operation throws out error
    throw new Exception("aaa");
}
catch (Exception ex)
{
    CommonLog.Error("CaptureSoldPrice", ex);
    dbContextTransaction.Rollback();
}
After a transaction is committed, it cannot be rolled back?
When using Entity Framework, explicit transactions are only required when you want to link the success or failure of operations against the DbContext with operations outside the scope of that DbContext. All operations within a DbContext prior to SaveChanges are already grouped into a transaction: saving entities across two or more tables within one DbContext, for instance, does not require setting up an explicit transaction; they will be committed or rolled back together if EF cannot save one or the other.
When using an explicit transaction, the Commit() call should be the last operation of what essentially forms a unit of work. It is the final operation that determines whether everything in the transaction scope succeeded. So as a general rule, all operations, whether database-based, file-based, or otherwise, should register with and listen to the success or failure of the transaction.
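Applied to the question's code, that means doing the file work before Commit() (a sketch; DoFileOperation is a hypothetical stand-in for the failing file operation):

try
{
    // ... DAO work as before ...
    dbContext.SaveChanges();
    DoFileOperation();              // hypothetical stand-in for the file work;
                                    // if it throws, nothing has been committed yet
    dbContextTransaction.Commit();  // commit last, once every step has succeeded
}
catch (Exception ex)
{
    CommonLog.Error("CaptureSoldPrice", ex);
    dbContextTransaction.Rollback(); // now valid: the transaction is still open
}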
An example of using a transaction: Say we have a system that accesses two databases via two separate DbContexts. One is an order system that tracks orders and has a record for a Customer and one is a CRM that tracks customer information. When we accept a new order from a new customer we check the CRM system for a customer and create a customer record in both systems if it is someone new.
using (var orderContext = new OrderDbContext())
{
    var transaction = orderContext.Database.BeginTransaction();
    try
    {
        var order = new Order
        {
            // TODO: Populate order details..
        };
        // Where customerDetails = details loaded if an existing customer logged in and authenticated...
        if (customerDetails == null && registerNewCustomer)
        {
            using (var crmContext = new CrmDbContext())
            {
                // Enlist the CRM context in the order context's transaction.
                // GetDbTransaction() is in Microsoft.EntityFrameworkCore.Storage;
                // both contexts must share the same database connection for this to work.
                crmContext.Database.UseTransaction(transaction.GetDbTransaction());
                var customer = new Customer
                {
                    UserName = orderDetails.CustomerEmailAddress,
                    FirstName = orderDetails.CustomerFirstName,
                    LastName = orderDetails.CustomerLastName,
                    Address = orderDetails.BillingAddress
                };
                crmContext.Customers.Add(customer);
                crmContext.SaveChanges();
                var orderCustomer = new Orders.Customer
                {
                    CustomerId = customer.CustomerId,
                    FirstName = customer.FirstName,
                    LastName = customer.LastName
                };
                orderContext.Customers.Add(orderCustomer);
                order.CustomerId = customer.CustomerId;
            }
        }
        else
        {
            // Existing customer: the CRM manages the customer IDs.
            using (var crmContext = new CrmDbContext())
            {
                order.CustomerId = crmContext.Customers
                    .Where(c => c.UserName == customerDetails.UserName)
                    .Select(c => c.CustomerId)
                    .Single();
            }
        }
        orderContext.Orders.Add(order);
        orderContext.SaveChanges();
        transaction.Commit();
    }
    catch (Exception ex)
    {
        // TODO: Log exception....
        transaction.Rollback();
    }
}
The order DB customer is just a thin wrapper around the CRM customer, which is where we would go for all of the customer details. The CRM customer manages the customer IDs, which correlate to an order customer record. This is by no means production-quality code, but rather an outline of how a transaction can coordinate multiple operations.
This way, if an exception is raised at any point, such as after a new Customer record has been created and saved, all saved changes will be rolled back, and we can inspect the logged exception details along with the recorded values to determine what went wrong.
When dealing with combinations of DbContext operations and other operations whose success or failure we want to tie together, with everything rolling back on failure, we can leverage constructs like the TransactionScope wrapper. However, this should be used with caution, and only in cases where you explicitly need to marry these operations, rather than as a standard pattern across all operations. In most cases you will not need explicit transactions.
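For reference, a minimal TransactionScope sketch (not from the original answer): the scope enlists ambient transactional work, and nothing is committed until Complete() is called.

using System.Transactions;

// TransactionScopeAsyncFlowOption.Enabled lets the ambient transaction
// flow across awaits if any of the enclosed work is async.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    dbContext.SaveChanges();     // enlists in the ambient transaction
    // ... other transactional work (e.g. a second DbContext) ...

    scope.Complete();            // the last call: without it, disposal rolls everything back
}

Note that no explicit Rollback() is needed: disposing the scope without calling Complete() is what triggers the rollback.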

How to avoid receiving duplicate requests in a controller

I have built a Web API service (MvcApplication) that receives a UserName and Email.
I have a URL to this service:
Ex: www.domain.com/Controller/SendUser
I have some affiliates that use this URL from their websites/landing pages, and they send me data with the Username and Email of a potential client.
Some affiliates have built their submit form incorrectly and allow users to press the SEND button many times before they see a response on the screen or are redirected to the next page. And here is where the problem appears:
I get 5-10 requests to my service with duplicate data, and all my validation functions and database-insert methods start to run.
IMPORTANT:
I don't want a solution at the database level; I want to stop the request at the beginning, without even starting the validation services.
I need to receive a request, temporarily save UserName+Email, and if I receive the same UserName+Email within the same second or the next 10 seconds, just ignore it.
I tried adding a static dictionary to Global.asax and saving an EncodedLead (from UserName+Email). I lock my GlobalMemoryLeads dictionary before I check ContainsKey(x), and then I add the key, but somehow I still get an error that the key already exists, even though the dictionary is locked.
It seems that threads get through the lock and try to add the same key, even when
GlobalMemoryLeads.ContainsKey(EncodedLead) returns false. Can another thread add a key to a dictionary that is locked by another thread? What am I missing here?
How can I avoid duplicate requests?
UPDATED
My code:
[AcceptVerbs(WebRequestMethods.Http.Get, WebRequestMethods.Http.Post)]
[AllowCrossSiteJson]
public ActionResult SendUser(string UserName, string Email)
{
    string response = "";
    bool duplicate = false;
    try
    {
        string Lead = UserName + Email;
        int EncodedLead = Lead.GetHashCode();
        //here i lock my GlobalMemoryLeads dictionary
        lock (MvcApplication.GlobalMemoryLeads)
        {
            //here i check if that key already exists
            if (!MvcApplication.GlobalMemoryLeads.ContainsKey(EncodedLead))
            {
                try
                {
                    MvcApplication.GlobalMemoryLeads.Add(EncodedLead, false);
                }
                catch (Exception ex)
                {
                    //here i get an error that the key already exists
                    //how is that possible if i have
                    //1: a global lock
                    //2: checked ContainsKey before
                    return Json(new { respondNotSuccess = "Duplicate Lead" });
                }
            }
        }
    }
    catch (Exception ex)
    {
        response = "Exception " + ex.Message;
    }
    return Json(new { respondSuccess = response });
}
In general, avoid locking on a public type, or on instances beyond your code's control.
The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem because any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
Try to do something like this:
private Object thisLock = new Object();

lock (thisLock)
{
    if (!MvcApplication.GlobalMemoryLeads.ContainsKey(EncodedLead))
    {
        try
        {
            MvcApplication.GlobalMemoryLeads.Add(EncodedLead, false);
            ...
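As an aside (not part of the original answer), a ConcurrentDictionary avoids the check-then-add race entirely, because TryAdd is atomic and returns false when the key is already present:

using System.Collections.Concurrent;

// e.g. in Global.asax:
public static ConcurrentDictionary<int, bool> GlobalMemoryLeads =
    new ConcurrentDictionary<int, bool>();

// In the action: no explicit lock needed.
if (!MvcApplication.GlobalMemoryLeads.TryAdd(EncodedLead, false))
{
    return Json(new { respondNotSuccess = "Duplicate Lead" });
}

Expiring entries after the 10-second window the question asks for would still need separate handling, e.g. storing a timestamp as the value and cleaning up periodically.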

If another request is running a method wait until finish

I'm developing an ASP.NET Web API application with C#, .NET Framework 4.7 and MongoDb.
I have this method:
[HttpPut]
[Route("api/Public/SendCommissioning/{serial}/{withChildren}")]
public HttpResponseMessage SendCommissioning(string serial, bool withChildren)
{
    string errorMsg = "Cannot set commissioning.";
    HttpResponseMessage response = null;
    bool serverFound = true;
    try
    {
        [...]
        // Mongo
        MongoHelper mgHelper = new MongoHelper();
        mgHelper.InsertCommissioning(serial, withChildren);
    }
    catch (Exception ex)
    {
        _log.Error(ex.Message);
        response = Request.CreateResponse(HttpStatusCode.InternalServerError);
        response.ReasonPhrase = errorMsg;
    }
    return response;
}
Sometimes this method is called several times in quick succession, and I get an error here:
// Mongo
MongoHelper mgHelper = new MongoHelper();
mgHelper.InsertCommissioning(serial, withChildren);
Here I'm inserting the serials I received, in order, and sometimes I get a duplicate-key error from MongoDB:
I have a method that gets the latest id used in Mongo (the primary key). Two requests can get the same id, so when I try to insert the second document into Mongo I get an invalid key exception.
I thought about using a queue to store the serials and then consuming them in the same order I received them, but I think I would get the same error when I try to store the serial in MongoDB.
Maybe it would work if I could make the method wait whenever it is already running. That method would contain the part that inserts the serials into Mongo.
How can I do that? A method that, while it is running, cannot be run by another Web API request.
Or do you know a better option?
By the way, I can't block this method. Maybe I need to run a thread with this synchronized part.
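One way to get "only one caller at a time" within a single process (a sketch, not from the original question; InsertCommissioningSerializedAsync is a hypothetical wrapper) is a static SemaphoreSlim with a single slot, which, unlike lock, can be held across await:

// One slot: at most one request runs the insert at a time (per process).
private static readonly SemaphoreSlim _insertLock = new SemaphoreSlim(1, 1);

public async Task InsertCommissioningSerializedAsync(string serial, bool withChildren)
{
    await _insertLock.WaitAsync(); // queued callers wait here
    try
    {
        MongoHelper mgHelper = new MongoHelper();
        mgHelper.InsertCommissioning(serial, withChildren);
    }
    finally
    {
        _insertLock.Release();
    }
}

Note this only serializes requests within one process; if the service is scaled out, letting the database generate the id (e.g. MongoDB's ObjectId) is the more robust fix.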

Closing WCF Service from Async method?

I have a service-layer project in an MVC 5 ASP.NET application I am creating on .NET 4.5.2, which calls out to an external 3rd-party WCF service to get information asynchronously. An original method to call the external service was as below (there are 3 of these, all similar, which I call in order from my GetInfoFromExternalService method; note it isn't actually called that, I'm just naming it for illustration):
private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    try
    {
        if (_externalpServiceClient == null)
        {
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        }
        string tokenId = await _externalpServiceClient.GetInfoForCarsAsync(cars).ConfigureAwait(false);
        return tokenId;
    }
    catch (Exception ex)
    {
        //TODO plug in log 4 net
        throw new Exception("Failed" + ex.Message);
    }
    finally
    {
        CloseExternalServiceClient(_externalpServiceClient);
        _externalpServiceClient = null;
    }
}
So when each async call completed, the finally block ran: the WCF client was closed and set to null, then newed up when another request was made. This was working fine until a change was needed whereby, if the number of cars passed in by the user exceeds 1000, I use a split function and then call my GetInfoFromExternalService method in a WhenAll with each batch of 1000, as below:
if (cars.Count > 1000)
{
    const int packageSize = 1000;
    var packages = SplitCars(cars, packageSize);
    //kick off the split packages we got above in parallel and await until they all complete
    await Task.WhenAll(packages.Select(GetInfoFromExternalService));
}
However, this now falls over: if I have 3000 cars, the call to GetTokenIdForCarsAsync news up the WCF service, but the finally block closes it, so the second batch of 1000 throws an exception. If I remove the finally block the code works OK, but it is obviously not good practice to leave this WCF client unclosed.
I tried putting the close after my if/else block where cars.Count is evaluated, but if a user uploads, e.g., 2000 cars and that completes and runs in, say, 1 minute, then in the meantime, since the user still has control of the webpage, they could upload another 2000, or another user could upload, and again it falls over with an exception.
Is there a good way anyone can see to correctly close the external service client?
Based on your related question, your "split" logic doesn't seem to give you what you're trying to achieve. WhenAll still executes requests in parallel, so you may end up running more than 1000 requests at any given moment. Use SemaphoreSlim to throttle the number of simultaneously active requests and limit that number to 1000. This way, you don't need to do any splits.
Another issue might be in how you handle the creation/disposal of the ExternalServiceClient client. I suspect there might be a race condition there.
Lastly, when you re-throw from the catch block, you should at least include a reference to the original exception.
Here's how to address these issues (untested, but it should give you the idea):
const int MAX_PARALLEL = 1000;

SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(MAX_PARALLEL);
volatile int _activeClients = 0;
readonly object _lock = new Object();
ExternalServiceClient _externalpServiceClient = null;

ExternalServiceClient GetClient()
{
    lock (_lock)
    {
        if (_activeClients == 0)
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        _activeClients++;
        return _externalpServiceClient;
    }
}

void ReleaseClient()
{
    lock (_lock)
    {
        _activeClients--;
        if (_activeClients == 0)
        {
            _externalpServiceClient.Close();
            _externalpServiceClient = null;
        }
    }
}

private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    var client = GetClient();
    try
    {
        await _semaphoreSlim.WaitAsync().ConfigureAwait(false);
        try
        {
            string tokenId = await client.GetInfoForCarsAsync(cars).ConfigureAwait(false);
            return tokenId;
        }
        catch (Exception ex)
        {
            //TODO plug in log 4 net
            throw new Exception("Failed" + ex.Message, ex);
        }
        finally
        {
            _semaphoreSlim.Release();
        }
    }
    finally
    {
        ReleaseClient();
    }
}
Updated based on the comment:

the External WebService company can accept me passing up to 5000 car objects in one call - though they recommend splitting into batches of 1000 and running up to 5 in parallel at one time - so when I mention 7000, I don't mean GetTokenIdForCarsAsync would be called 7000 times - with my code currently it should be called 7 times, i.e. giving me back 7 token ids - I am wondering, can I use your SemaphoreSlim to run the first 5 in parallel and then 2?
The changes are minimal (but untested). First:
const int MAX_PARALLEL = 5;
Then, using Marc Gravell's ChunkExtension.Chunkify, we introduce GetAllTokenIdForCarsAsync, which in turn calls GetTokenIdForCarsAsync from above:
private async Task<string[]> GetAllTokenIdForCarsAsync(Car[] cars)
{
    var chunks = cars.Chunkify(1000);
    var tasks = chunks.Select(chunk => GetTokenIdForCarsAsync(chunk)).ToArray();
    await Task.WhenAll(tasks);
    return tasks.Select(task => task.Result).ToArray();
}
Now you can pass all 7000 cars into GetAllTokenIdForCarsAsync. This is a skeleton; it can be improved with retry logic in case any of the batch requests fail (I'm leaving that up to you).
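For completeness, a minimal Chunkify-style helper (a sketch; the original answer refers to Marc Gravell's ChunkExtension rather than this exact code):

using System.Collections.Generic;
using System.Linq;

static class ChunkExtension
{
    // Splits a sequence into arrays of at most `size` elements, preserving order.
    public static IEnumerable<T[]> Chunkify<T>(this IEnumerable<T> source, int size)
    {
        var chunk = new List<T>(size);
        foreach (var item in source)
        {
            chunk.Add(item);
            if (chunk.Count == size)
            {
                yield return chunk.ToArray();
                chunk.Clear();
            }
        }
        if (chunk.Count > 0)
            yield return chunk.ToArray(); // final partial chunk
    }
}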

Multithreaded linq2sql applications TransactionScope difficulties

I've created a file-processing service which reads and imports XML files from a specific directory.
The service starts several workers which poll a file queue for new files, and it uses linq2sql for data access. Each worker thread has its own DataContext.
The files being processed contain several orders, and each order contains several addresses (Customer/Contractor/Subcontractor).
I've defined a TransactionScope around the handling of each file. This way I want to ensure that the whole file is handled correctly, or that the whole file is rolled back when an exception occurs:
try
{
    using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        foreach (var order in orders)
        {
            HandleType1Order(order);
        }
        tx.Complete();
    }
}
catch (SqlException ex)
{
    if (ex.Number == SqlErrorNumbers.Deadlock)
    {
        throw new FileHandlerException("File Caused a Deadlock, retrying later", ex, true);
    }
    else
        throw;
}
One of the requirements for the service is that it creates or updates the addresses found in the XML files. So I've created an address service which is responsible for address management. The following piece of code gets executed for each order (within the method HandleType1Order()) in the XML import file, and is thus part of the TransactionScope for the entire file:
using (var tx = new TransactionScope())
{
    address = GetAddressByReference(number);
    if (address != null) //address is already known
    {
        Log.Debug("Found address {0} - {1}. Updating...", address.Code, address.Name);
        UpdateAddress(address, name, number, isContractor, isSubContractor, isCustomer);
    }
    else
    {
        //address not known, so create it
        Log.Debug("Address {0} not known, creating address", number);
        address = CreateAddress(name, number, sourceSystemId, isContractor, isSubContractor,
                                isCustomer);
        _addressRepository.Save(address);
    }
    _addressRepository.Flush();
    tx.Complete();
}
What I'm trying to do here is create or update an address, with the number being unique.
The method GetAddressByReference(string number) returns the known address, or null when the address is not found.
public virtual Address GetAddressByReference(string reference)
{
    return _addressRepository.GetAll().SingleOrDefault(a => a.Code == reference);
}
When I run the service, however, it creates multiple addresses with the same number. The method GetAddressByReference() gets called concurrently and should return a known address when a second thread executes it with the same address number; however, it returns null. There is probably something wrong with my transaction boundaries or isolation level, but I can't seem to get it to work.
Can someone point me in the right direction? Help is much appreciated!
p.s. I've no problem with the transactions being deadlocked and causing a rollback; the file will just be retried when a deadlock occurs.
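For reference (not part of the original question), the isolation level of a TransactionScope can be pinned explicitly via TransactionOptions; something along these lines could make the read-then-insert in the address service stricter:

using System.Transactions;

// Serializable takes range locks on the rows read, so a concurrent
// "read, find nothing, insert" on the same address number blocks
// (or deadlocks and gets retried) instead of both threads inserting.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var tx = new TransactionScope(TransactionScopeOption.Required, options))
{
    var address = GetAddressByReference(number);
    // ... update or create as in the code above ...
    tx.Complete();
}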
Edit 1: Threading code:
public void Work()
{
    _isRunning = true;
    while (true)
    {
        ImportFileTask task = _queue.Dequeue(); //dequeue blocks on empty queue
        if (task == null)
            break; //shutdown worker when a null task is read from the queue

        IFileImporter importer = null;
        try
        {
            using (new LockFile(task.FilePath).Acquire()) //create a file lock to sync access to the file across all processes
            {
                importer = _kernel.Resolve<IFileImporter>();
                Log.DebugFormat("Processing file {0}", task.FilePath);
                importer.Import(task.FilePath);
                Log.DebugFormat("Done Processing file {0}", task.FilePath);
            }
        }
        catch (Exception ex)
        {
            Log.Fatal(
                "A Fatal exception occurred while handling {0} --> {1}".FormatWith(task.FilePath, ex.Message), ex);
        }
        finally
        {
            if (importer != null)
                _kernel.ReleaseComponent(importer);
        }
    }
    _isRunning = false;
}
The above method runs in all of our worker threads. It uses Castle Windsor to resolve the FileImporter, which has a transient lifestyle (and is thus not shared across threads).
You didn't post your threading code, so it's difficult to say what the issue is. I'm assuming you have started the DTC (Distributed Transaction Coordinator)?
Are you using a ThreadPool? Are you using the "lock" keyword?
http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
