Protecting critical sections based on a condition in C#

I'm dealing with a curious scenario.
I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is that I need to query the database to see whether a record with a particular key has already been created, in order to set a field value: "executing" if it exists, or "pending" if it's new. Those records are identified by a unique guid.
I've solved this problem by setting a lock, since I know the entity will not be present in any other process; in other words, I will not have the same guid in different processes, and it seems to be working fine. It looks something like this:
static readonly object LockableObject = new object();

static void SaveElement(Entity e)
{
    lock (LockableObject)
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
        {
            Repository.Update(e);  // the record already exists: "executing"
        }
        else
        {
            Repository.Insert(e);  // the record is new: "pending"
        }
    }
}
But this implies that when I have a huge amount of requests to be saved, they will be queued.
I wonder if there is something like this (please, take it just as an idea):
static void SaveElement(Entity e)
{
    using (var protector = new ThisWouldBeAClassToProtectBasedOnACondition(x => x.UniqueId))
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
        {
            Repository.Update(e);
        }
        else
        {
            Repository.Insert(e);
        }
    }
}
The idea would be to have a kind of protection that locks based on a condition, so each entity e would have its own lock based on the e.UniqueId property.
Any idea?

Don't use application locks where database transactions or constraints are needed.
Using a lock to prevent duplicate entries in a database is not a good idea. It limits the scalability of your application by forcing only a single instance to ever exist that can add or update such records. Worse, someone will eventually try to scale the application out to multiple processes or servers, and it will cause data corruption, since locks are local to a single process.
What you should consider instead is a combination of unique constraints in the database and transactions, to ensure that no two attempts to add the same entry can both succeed. One will succeed; the other will be forced to roll back.
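A minimal sketch of that approach, assuming a unique index on the guid column and a repository that exposes a SaveChanges-style commit (the Repository methods come from the question; the index, the SaveChanges call, and the exact exception type are assumptions that depend on your schema and EF version):
// Assumes something like: CREATE UNIQUE INDEX UX_Entities_UniqueId ON Entities (UniqueId);
static void SaveElement(Entity e)
{
    try
    {
        Repository.Insert(e);      // optimistically insert
        Repository.SaveChanges();  // a concurrent duplicate violates the unique index here
    }
    catch (DbUpdateException)      // EF 4.1+ DbContext wraps SQL duplicate-key errors 2601/2627
    {
        // Another thread or process created the row first; take the update path instead.
        Repository.Update(e);
        Repository.SaveChanges();
    }
}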

This might work for you: you can just lock on the instance of e:
lock (e)
{
    Entity e2 = Repository.FindByKey(e);
    if (e2 != null)
    {
        Repository.Update(e);
    }
    else
    {
        Repository.Insert(e);
    }
}
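If the goal is one lock per UniqueId rather than per object instance (two Entity instances carrying the same guid would not block each other with lock(e)), a sketch of a per-key lock using a ConcurrentDictionary could look like this; the dictionary and its lifetime management are assumptions, not part of the original code, and like any in-process lock it only protects a single process:
static readonly ConcurrentDictionary<Guid, object> KeyLocks =
    new ConcurrentDictionary<Guid, object>();

static void SaveElement(Entity e)
{
    // Callers with the same UniqueId share one lock object;
    // callers with different ids proceed in parallel.
    object keyLock = KeyLocks.GetOrAdd(e.UniqueId, _ => new object());
    lock (keyLock)
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
        {
            Repository.Update(e);
        }
        else
        {
            Repository.Insert(e);
        }
    }
    // Note: entries are never evicted here; a production version would need
    // some form of cleanup to avoid unbounded growth.
}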

Related

Alternative to a monitor for checking the existence of a record, and writing it if it doesn't exist

So I have a bit of C# code that looks like the below (simplified for the purpose of the question, any bugs are from me making these changes). This code can be called from multiple threads or contexts, in an asynchronous fashion. The whole purpose of this is to make sure that if a record already exists, it is used, and if it doesn't it gets created. May or may not be great design, but this works as expected.
var timeout = TimeSpan.FromMilliseconds(500);
bool lockTaken = false;
try
{
    Monitor.TryEnter(m_lock, timeout, ref lockTaken); // m_lock declared statically above
    if (lockTaken)
    {
        var myDBRecord = _DBContext.MyClass.SingleOrDefault(x => x.ForeignKeyId1 == ForeignKeyId1
                                                              && x.ForeignKeyId2 == ForeignKeyId2);
        if (myDBRecord == null)
        {
            myDBRecord = new MyClass
            {
                ForeignKeyId1 = ForeignKeyId1,
                ForeignKeyId2 = ForeignKeyId2
                // ...datapoints
            };
            _DBContext.MyClass.Add(myDBRecord);
            _DBContext.SaveChanges();
        }
    }
    else
    {
        throw new Exception("Can't get lock");
    }
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(m_lock);
    }
}
The problem occurs if a lot of requests come in: they can overwhelm the monitor, timing out if they have to wait too long. While the timeout for the lock can certainly be shorter, what is the preferred approach, if any, to addressing this type of problem? Anything that checks whether the monitored code needs to be entered would itself need to be part of that atomic operation.
I would suggest that you get rid of the monitor altogether and instead handle the duplicate key exception. You have to handle the condition where you are trying to insert a duplicate value anyway, so why not do so directly?
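A sketch of that, assuming a unique constraint on the (ForeignKeyId1, ForeignKeyId2) pair exists in the database (the constraint, and DbUpdateException as the surfaced type, are assumptions that depend on your schema and EF version):
var myDBRecord = _DBContext.MyClass.SingleOrDefault(x => x.ForeignKeyId1 == ForeignKeyId1
                                                      && x.ForeignKeyId2 == ForeignKeyId2);
if (myDBRecord == null)
{
    myDBRecord = new MyClass
    {
        ForeignKeyId1 = ForeignKeyId1,
        ForeignKeyId2 = ForeignKeyId2
    };
    _DBContext.MyClass.Add(myDBRecord);
    try
    {
        _DBContext.SaveChanges(); // the unique constraint rejects a concurrent duplicate
    }
    catch (DbUpdateException)     // duplicate key: another caller won the race
    {
        // Stop tracking the failed insert, then load the row the other caller created.
        _DBContext.Entry(myDBRecord).State = EntityState.Detached;
        myDBRecord = _DBContext.MyClass.Single(x => x.ForeignKeyId1 == ForeignKeyId1
                                                 && x.ForeignKeyId2 == ForeignKeyId2);
    }
}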

How to use Lazy to handle concurrent requests?

I'm new to C# and trying to understand how to work with Lazy.
I need to handle concurrent requests by waiting for the result of an already running operation. Requests for data may come in simultaneously with the same or different credentials.
For each unique set of credentials there can be at most one GetDataInternal call in progress, with the result from that one call returned to all queued waiters when it is ready.
private readonly ConcurrentDictionary<Credential, Lazy<Data>> Cache
    = new ConcurrentDictionary<Credential, Lazy<Data>>();

public Data GetData(Credential credential)
{
    // This instance will be thrown away if a cached
    // value with our "credential" key already exists.
    Lazy<Data> newLazy = new Lazy<Data>(
        () => GetDataInternal(credential),
        LazyThreadSafetyMode.ExecutionAndPublication
    );

    Lazy<Data> lazy = Cache.GetOrAdd(credential, newLazy);
    bool added = ReferenceEquals(newLazy, lazy); // If true, we won the race.

    Data data;
    try
    {
        // Wait for the GetDataInternal call to complete.
        data = lazy.Value;
    }
    finally
    {
        // Only the thread which created the cache value
        // is allowed to remove it, to prevent races.
        if (added)
        {
            Cache.TryRemove(credential, out lazy);
        }
    }
    return data;
}
Is that the right way to use Lazy, or is my code not safe?
Update:
Is it a good idea to start using MemoryCache instead of ConcurrentDictionary? If yes, how do I create a key value, since the key has to be a string in MemoryCache.Default.AddOrGetExisting()?
This is correct. This is a standard pattern (except for the removal) and it's a really good cache because it prevents cache stampeding.
I'm not sure you want to remove from the cache when the computation is done because the computation will be redone over and over that way. If you don't need the removal you can simplify the code by basically deleting the second half.
Note that Lazy has a problem in the case of an exception: the exception is stored and the factory will never be re-executed. The problem persists forever (until a human restarts the app). In my mind, this makes Lazy completely unsuitable for production use in most cases.
This means that a transient error such as a network issue can render the app unavailable permanently.
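One common mitigation, sketched here against the Cache field from the question: evict a faulted Lazy so a later request re-runs the factory instead of observing the stored exception. The conditional-removal cast is a documented ConcurrentDictionary trick: the ICollection<KeyValuePair<...>>.Remove overload removes the entry only when both key and value match.
public Data GetDataWithRetryOnFailure(Credential credential) // hypothetical variant of GetData
{
    Lazy<Data> lazy = Cache.GetOrAdd(credential, _ =>
        new Lazy<Data>(() => GetDataInternal(credential),
                       LazyThreadSafetyMode.ExecutionAndPublication));
    try
    {
        return lazy.Value;
    }
    catch
    {
        // Remove the entry only if it still maps to this exact failed instance,
        // so a newer, healthy Lazy added by another thread is never evicted.
        ((ICollection<KeyValuePair<Credential, Lazy<Data>>>)Cache)
            .Remove(new KeyValuePair<Credential, Lazy<Data>>(credential, lazy));
        throw;
    }
}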
This answer is directed at the updated part of the original question. See usr's answer regarding thread-safety with Lazy<T> and the potential pitfalls.
I would like to know how to avoid using ConcurrentDictionary<TKey, TValue> and start using MemoryCache. How do I use MemoryCache.Default.AddOrGetExisting()?
If you're looking for a cache with a mechanism for automatic expiry, then MemoryCache is a good choice if you don't want to implement the mechanics yourself.
In order to use MemoryCache, which forces a string representation for a key, you'll need to create a unique string representation of a credential, perhaps a given user id or a unique username.
If you can, create an override of ToString which returns your unique identifier, or simply use the said property, and use MemoryCache like this:
public class Credential
{
    public Credential(int userId)
    {
        UserId = userId;
    }

    public int UserId { get; private set; }
}
And now your method will look like this:
private const int EvictionIntervalMinutes = 10;

public Data GetData(Credential credential)
{
    Lazy<Data> newLazy = new Lazy<Data>(
        () => GetDataInternal(credential), LazyThreadSafetyMode.ExecutionAndPublication);

    CacheItemPolicy evictionPolicy = new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(EvictionIntervalMinutes)
    };

    var result = MemoryCache.Default.AddOrGetExisting(
        new CacheItem(credential.UserId.ToString(), newLazy), evictionPolicy);

    // AddOrGetExisting returns a CacheItem whose Value is null when our item
    // was just added; otherwise it holds the existing entry.
    return result.Value != null ? ((Lazy<Data>)result.Value).Value : newLazy.Value;
}
MemoryCache provides you with a thread-safe implementation; this means that two threads accessing AddOrGetExisting will only cause a single cache item to be added or retrieved. Further, Lazy<T> with ExecutionAndPublication guarantees only a single unique invocation of the factory method.

MongoServer.State equivalent in the 2.0 driver

In the old API (1.X) you could tell whether the server was connected or not by using the State property on the MongoServer instance returned from MongoClient.GetServer:
public bool IsConnected
{
    get
    {
        return _client.GetServer().State == MongoServerState.Connected;
    }
}
However, GetServer is not part of the new API (2.0). How can that be achieved?
The more appropriate way to do this is not by checking the server, but rather the cluster (which may contain multiple servers), which you can access directly from the MongoClient instance:
public bool IsClusterConnected
{
    get
    {
        return _client.Cluster.Description.State == ClusterState.Connected;
    }
}
If you would like to check a specific server that's also possible:
public bool IsServerConnected
{
    get
    {
        return _client.Cluster.Description.Servers.Single().State == ServerState.Connected;
    }
}
Keep in mind that the value is updated by the last operation so it may not be current. The only way to actually make sure there's a valid connection is to execute some kind of operation.
As noted by i3arnon, one has to perform some sort of operation on the database before the state is updated properly.
The act of enumerating the databases is sufficient to update the state.
This worked for me:
var databases = _client.ListDatabasesAsync().Result;
databases.MoveNextAsync().Wait(); // Force MongoDB to connect to the database.
if (_client.Cluster.Description.State == ClusterState.Connected)
{
    // Database is connected.
}
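Another option, if you want to force a round trip without enumerating anything, is to issue a ping command; this is a sketch against the 2.0 driver API and blocks synchronously like the snippet above:
// Run a cheap ping against the admin database to force a connection attempt.
var ping = new BsonDocumentCommand<BsonDocument>(new BsonDocument("ping", 1));
_client.GetDatabase("admin").RunCommandAsync(ping).Wait();

if (_client.Cluster.Description.State == ClusterState.Connected)
{
    // Database is connected.
}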

Using SQL Server application locks to solve locking requirements

I have a large application based on Dynamics CRM 2011 that in various places has code that must query for a record based upon some criteria and create it if it doesn't exist, or update it if it does.
An example of the kind of thing I am talking about would be similar to this:
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    record.Id = Guid.NewGuid();
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
In terms of the CRM 2011 implementation (although not strictly relevant to this question), the code could be triggered from synchronous or asynchronous plugins. The issue is that the code is not thread safe: between checking if the record exists and creating it if it doesn't, another thread could come in and do the same thing first, resulting in duplicate records.
Normal locking methods are not reliable due to the architecture of the system: various services using multiple threads could all be running the same code, and these services are also load balanced across multiple machines.
In trying to find a solution that doesn't add massive amounts of extra complexity and doesn't introduce a single point of failure or a single point where a bottleneck could occur, I came across the idea of using SQL Server application locks.
I came up with the following class:
public class SQLLock : IDisposable
{
    // Lock constants
    private const string _lockMode = "Exclusive";
    private const string _lockOwner = "Transaction";
    private const string _lockDbPrincipal = "public";

    // The connection passed to the constructor
    private SqlConnection _connection;
    // The name of the application lock created in SQL
    private string _lockName;
    // The timeout value of the lock
    private int _lockTimeout;
    // The SQL transaction containing the lock
    private SqlTransaction _transaction;
    // Whether the lock was created OK
    private bool _lockCreated = false;

    public SQLLock(string lockName, int lockTimeout = 180000)
    {
        _connection = Connection.GetMasterDbConnection();
        _lockName = lockName;
        _lockTimeout = lockTimeout;

        // Create the application lock
        CreateLock();
    }

    public void Dispose()
    {
        // Release the application lock if it was created
        if (_lockCreated)
        {
            ReleaseLock();
        }

        _connection.Close();
        _connection.Dispose();
    }

    private void CreateLock()
    {
        _transaction = _connection.BeginTransaction();
        using (SqlCommand createCmd = _connection.CreateCommand())
        {
            createCmd.Transaction = _transaction;
            createCmd.CommandType = System.Data.CommandType.Text;

            StringBuilder sbCreateCommand = new StringBuilder();
            sbCreateCommand.AppendLine("DECLARE @res INT");
            sbCreateCommand.AppendLine("EXEC @res = sp_getapplock");
            sbCreateCommand.Append("@Resource = '").Append(_lockName).AppendLine("',");
            sbCreateCommand.Append("@LockMode = '").Append(_lockMode).AppendLine("',");
            sbCreateCommand.Append("@LockOwner = '").Append(_lockOwner).AppendLine("',");
            sbCreateCommand.Append("@LockTimeout = ").Append(_lockTimeout).AppendLine(",");
            sbCreateCommand.Append("@DbPrincipal = '").Append(_lockDbPrincipal).AppendLine("'");
            sbCreateCommand.AppendLine("IF @res NOT IN (0, 1)");
            sbCreateCommand.AppendLine("BEGIN");
            sbCreateCommand.AppendLine("RAISERROR ( 'Unable to acquire Lock', 16, 1 )");
            sbCreateCommand.AppendLine("END");

            createCmd.CommandText = sbCreateCommand.ToString();

            try
            {
                createCmd.ExecuteNonQuery();
                _lockCreated = true;
            }
            catch (Exception ex)
            {
                _transaction.Rollback();
                throw new Exception(string.Format("Unable to get SQL Application Lock on '{0}'", _lockName), ex);
            }
        }
    }

    private void ReleaseLock()
    {
        using (SqlCommand releaseCmd = _connection.CreateCommand())
        {
            releaseCmd.Transaction = _transaction;
            releaseCmd.CommandType = System.Data.CommandType.StoredProcedure;
            releaseCmd.CommandText = "sp_releaseapplock";
            releaseCmd.Parameters.AddWithValue("@Resource", _lockName);
            releaseCmd.Parameters.AddWithValue("@LockOwner", _lockOwner);
            releaseCmd.Parameters.AddWithValue("@DbPrincipal", _lockDbPrincipal);

            try
            {
                releaseCmd.ExecuteNonQuery();
            }
            catch {}
        }

        _transaction.Commit();
    }
}
I would use this in my code to create a SQL Server application lock, using the unique key I am querying for as the lock name, like this:
using (var sqlLock = new SQLLock(id))
{
    // Code to check for and create or update the record here
}
Now, this approach seems to work; however, I am by no means a SQL Server expert and am wary about putting this anywhere near production code.
My question really has 3 parts:
1. Is this a really bad idea because of something I haven't considered?
Are SQL Server application locks completely unsuitable for this purpose?
Is there a maximum number of application locks (with different names) you can have at a time?
Are there performance considerations if a potentially large number of locks are created?
What else could be an issue with the general approach?
2. Is the solution actually implemented above any good?
If SQL Server application locks are usable like this, have I actually used them properly?
Is there a better way of using SQL Server to achieve the same result?
In the code above I am getting a connection to the Master database and creating the locks in there. Does that potentially cause other issues? Should I create the locks in a different database?
3. Is there a completely alternative approach that could be used that doesn't use SQL Server application locks?
I can't use stored procedures to create and update the record (unsupported in CRM 2011).
I don't want to add a single point of failure.
You can do this much more easily.
// Make sure your plugin runs within a transaction; this is the case for stage 20 and 40.
// You can check this with IExecutionContext.IsInTransaction.
// It does not work with offline plugins, but works within CRM Online (cloud) and is fully supported.
// It also works on transaction rollback.
var lockUpdateEntity = new dummy_lock_entity(); // a simple technical entity with as many rows as different lock barriers you need
lockUpdateEntity.Id = Guid.Parse("well known guid"); // well-known guid for this barrier
lockUpdateEntity.dummy_field = Guid.NewGuid(); // just update/change a field to create a lock, no matter its content

//--------------- this is untested by me, I use the next one
context.UpdateObject(lockUpdateEntity);
context.SaveChanges();
//---------------
// OR
//--------------- I use this one, but you need a reference to your OrganizationService
OrganizationService.Update(lockUpdateEntity);
//---------------
// Threads wait here if they don't hold the lock for dummy_lock_entity with the "well known guid"
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    //record.Id = Guid.NewGuid(); // not needed
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
// Let the pipeline flow and the transaction complete...
For more background info refer to http://www.crmsoftwareblog.com/2012/01/implementing-robust-microsoft-dynamics-crm-2011-auto-numbering-using-transactions/

Entity Framework SaveChanges does not always work

I am using EF 4.2 and am having an issue that happens quite randomly and without warning. I have a Windows service which updates the database. In the service I have a timer; when the timer elapses, a method gets called. This is the basic structure of the method:
IEnumerable<Foo> foos = GetFoosFromDB();
foreach (Foo foo in foos)
{
    if (some condition)
    {
        foo.Bar = 1;
    }
    if (some other condition)
    {
        foo.Bar = 2;
    }
    if (yet some other condition)
    {
        foo.Bar = 3;
    }
    else
    {
        int val = GetSomeValueFromDB();
        if (val == something)
        {
            if (GetSomeOtherValueFromDB())
            {
                foo.Bar = 4;
            }
            else
            {
                CallSomeMethodThatAlsoCallsSaveChanges();
                foo.Bat = SomeCalculatedValue();
            }
        }
    }
}
SaveChanges();
Now, the problem is that once we have been working with the database for a day and there are a few rows in the tables (we are talking about only 100 or 200 rows), then even though this method is called, SaveChanges doesn't seem to do what it should. What am I doing wrong?
Thanks,
Sachin
Ignoring the other aspects of the code, this line stuck out as a likely problem:
else
{
    CallSomeMethodThatAlsoCallsSaveChanges();
    foo.Bat = SomeCalculatedValue();
}
// a few }} later...
SaveChanges();
When this logic branch is executed, your context's pending changes are committed to the DB (based on what you've provided). Depending on how you're creating and managing your db context objects, you've either cleared the modified list or introduced potential change conflicts. When SaveChanges() is called after the loop, it may or may not have pending changes to commit (depending on whether the conditional logic called your other method).
Consider what logical unit(s) of work are being performed with this logic, and keep those units of work atomically separated. Think about how your DB contexts are being created, managed, and passed around, since those maintain the local state of your objects.
If you are still having trouble, you can post more of your code and we can attempt to troubleshoot further.
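As a sketch of that advice (MyDbContext and NeedsRecalculation are assumed names standing in for the real context type and the conditional logic): keep one context per unit of work, mutate entities inside the loop, and commit once at the end, so a nested SaveChanges never clears or conflicts with the pending change set.
// Hypothetical restructuring: one context, one commit per unit of work.
using (var context = new MyDbContext()) // assumed context type
{
    foreach (Foo foo in context.Foos.ToList())
    {
        if (NeedsRecalculation(foo)) // assumed predicate standing in for the conditions
        {
            // Mutate only; read any values you need, but do not save inside the loop.
            foo.Bat = SomeCalculatedValue();
        }
    }
    context.SaveChanges(); // single commit for the whole batch
}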
