Will the following ever cause a race condition when accessing an ASP.NET Cache item? (C#)

I am using the following code, and it seems to me that a race condition could never happen with it. Or is there still a chance of a race condition?
List<Document> listFromCache = Cache[dataCacheName] as List<Document>;
if (listFromCache != null)
{
//do something with listFromCache. **IS IT POSSIBLE** that listFromCache is
//NULL here
}
else
{
List<Document> list = ABC.DataLayer.GetDocuments();
Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5),
System.Web.Caching.Cache.NoSlidingExpiration);
}
UPDATE:
Chris helped me solve this problem, but I thought I would share some details that may be very helpful to others.
To completely avoid any race condition, I had to add a check inside the true branch; otherwise I could end up with a list with zero count if someone else clears the list held in Cache (not removing the item, but calling the Clear method on the List object itself) after my if has evaluated to true. In that case I would have no data to work with inside the true branch.
To overcome this subtle race condition in my original code, I had to double-check listFromCache in the true branch, as in the code below, and then repopulate the Cache with the latest data.
Also, as Chris said, if someone else removes the item from Cache by calling Cache.Remove, listFromCache is not affected: the garbage collector will not reclaim the actual List object from the heap, because the local variable listFromCache still holds a reference to it (I have explained this in more detail in a comment under Chris's answer).
List<Document> listFromCache = Cache[dataCacheName] as List<Document>;
if (listFromCache != null)
{
//OVERCOME A SUBTLE RACE CONDITION WITH THE CHECK BELOW
if (listFromCache.Count == 0)
{
List<Document> list = ABC.DataLayer.GetDocuments();
Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5),
System.Web.Caching.Cache.NoSlidingExpiration);
listFromCache = list; //repoint at the fresh data, not the cleared list
}
//NOW I AM SURE listFromCache CONTAINS REAL DATA
//do something with listFromCache; it cannot be null or empty here
}
else
{
List<Document> list = ABC.DataLayer.GetDocuments();
Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5),
System.Web.Caching.Cache.NoSlidingExpiration);
}

No, it's not possible that listFromCache will become null at the point you ask about, because it's a local reference there. If the cache entry is removed or replaced elsewhere, that doesn't affect your local reference. However, you could get a condition where you retrieved a null value, but while you were gathering the documents (ABC.DataLayer.GetDocuments()) another process did the same and inserted the cache entry, at which point you overwrite it. (This may be perfectly acceptable for you, in which case, great!)
You could try locking around it with a static object, but honestly I'm not sure whether that will work in an ASP.NET context. I don't remember whether Cache is shared across all ASP.NET worker processes (which, IIRC, have different static contexts) or only within each single web worker. If the latter, the static lock will work fine.
Just to demonstrate:
List<Document> listFromCache = Cache[dataCacheName] as List<Document>;
if (listFromCache != null)
{
Cache.Remove(dataCacheName);
//listFromCache will NOT be null here.
if (listFromCache != null)
{
Console.WriteLine("Not null!"); //this will run because it's not null
}
}
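For what it's worth, a lock-based version of the read-or-populate logic might look like the sketch below. This is only a sketch, not a drop-in solution: it assumes a single worker process (so one static lock guards the cache entry), and dataCacheName and ABC.DataLayer.GetDocuments() are taken from the question; GetDocumentsCached is a name invented here.

```csharp
private static readonly object CacheLock = new object();

private List<Document> GetDocumentsCached()
{
    // First read without the lock: the common, already-cached case.
    List<Document> listFromCache = Cache[dataCacheName] as List<Document>;
    if (listFromCache != null)
        return listFromCache;

    lock (CacheLock)
    {
        // Re-check inside the lock: another thread may have
        // populated the entry while we were waiting.
        listFromCache = Cache[dataCacheName] as List<Document>;
        if (listFromCache == null)
        {
            listFromCache = ABC.DataLayer.GetDocuments();
            Cache.Insert(dataCacheName, listFromCache, null,
                DateTime.Now.AddMinutes(5),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return listFromCache;
    }
}
```

This stops two threads from both calling GetDocuments and overwriting each other's insert, at the cost of serializing the cache-miss path.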

Related

Do Loop, an efficient break method?

I have a section inside a method that does something similar to this:
do
{
// some query
// another query
if another query != null
{
// return something
}
}while(some query != null)
Obviously this does not work, because some query is not declared until it is inside the loop. One solution I tried was
bool flag = false;
do
{
// some query
if some query == null
{
flag = true;
// or
break;
}
// another query
if another query != null
{
// return something
}
}while(flag != true)
Neither method really satisfies me, and quite honestly I am not sure whether they would be considered good coding practice, which irks me. Moreover, this has pretty much been my go-to solution in cases like this up until now, but because of the garbage nature of the flag, I wanted to find out whether there is a better way to handle this for future reference, instead of making a junk variable. I should note that the other solution I thought of would arguably be uglier: run the query once outside the loop, convert it into a while loop, and call the query again inside the loop, rather than using a do loop.
While the code works with the above solution, I was wondering if anyone had a better solution that does not require an arguably pointless variable.
Though I understand that such a better solution may not be possible, or really even needed, it could be ridiculous to even try.
Having a break or flag variable isn't what would make something inefficient; it's what's inside the loop that should be your concern. In other words, it's just a preference and either is fine.
I think you need
while(true)
{
// some query
if some query == null
{
break;
}
// another query
if another query != null
{
// return something
}
}
You can try this:
do
{
// some query
if some query == null
{
break;
}
// another query
if another query != null
{
// return something
}
}while(true);
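Another option is to wrap each query in a method, so the loop condition can call the first query directly and no flag is needed. The sketch below uses hypothetical stand-ins (SomeQuery, AnotherQuery, a Queue as the data source) since the real queries aren't shown in the question:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Hypothetical stand-ins for "some query" and "another query".
    static Queue<string> source = new Queue<string>(new[] { "a", "b" });

    static string SomeQuery()
    {
        return source.Count > 0 ? source.Dequeue() : null;
    }

    static string AnotherQuery(string input)
    {
        return input == "b" ? "found " + input : null;
    }

    static string Find()
    {
        string some;
        // Assigning inside the condition removes the need for a flag:
        while ((some = SomeQuery()) != null)
        {
            string another = AnotherQuery(some);
            if (another != null)
            {
                return another; // "return something"
            }
        }
        return null; // first query came up empty
    }

    static void Main()
    {
        Console.WriteLine(Find()); // prints "found b"
    }
}
```

The assignment-in-condition idiom keeps the "declare before use" problem out of the loop body without introducing a junk variable.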

Avoid locking an Object that's already in the session (UniqueObjectException)

I call this function on my objects when they need to be initialized again:
public virtual void Initialize()
{
if (!HibernateSessionManager.Instance.GetSession().Contains(this)) {
try
{
HibernateSessionManager.Instance.GetSession()
.Lock(this, NHibernate.LockMode.None);
}
catch (NonUniqueObjectException) { /* an instance with this id is already in the session */ }
}
}
I thought I could prevent initializing something twice by checking Contains(this), but sometimes Lock(this, NHibernate.LockMode.None) throws a NonUniqueObjectException. So far I have ignored it because it works, but I'd like to know the reason and a better way to lock my objects.
Best regards, Expecto
This most likely means you violate the Identity map somewhere. This means you have two instances of an object hanging around with the same database ID, but different referential identity.
Session.Contains will check reference equality, but Lock will throw an exception if there's anything with the same type & id already in the session, which is a much less strict comparison.
Consider the following test on the AdventureWorks database, with a (very naive and unrecommended) simple implementation of Equals & GetHashCode
using (ISession session = SessionFactory.Factory.OpenSession())
{
int someId = 329;
Person p = session.Get<Person>(someId);
Person test = new Person() { BusinessEntityID = someId };
Assert.IsTrue(p.Equals(test)); //your code might think the objects are equal, so you'd probably expect the next line to return true
Assert.IsFalse(session.Contains(test)); //But they're not the same object
Assert.Throws<NonUniqueObjectException>(() =>
{
session.Lock(test, LockMode.None); //So when you ask nhibernate to track changes on both objects, it gets very confused
});
}
NHibernate (and I'd guess any ORM) works by tracking changes to objects. So in Get'ing Person 329, you ask NHibernate to pay attention to whatever happens to that particular instance of a Person. Let's say we change his first-name to Jaime.
Next, we get another instance of person with the same Id (in this case we just new'ed it up, but there are many insidious ways to get such an object). Imagine NHibernate would let us attach this to the session as well. We could even set the first-name of this second object to something like Robb.
When we flush the session, NHibernate has no way of knowing whether the database row needs to be synced to Robb or to Jaime. So it throws the NonUniqueObjectException your way before that can happen.
Ideally these situations shouldn't crop up, but if you're very sure what's happening, you might want to check out session.Merge, which lets you force the tracked state to whatever happens to be merged in last (Robb in the example).
The problem was something completely different - Contains checks for equality by reference if I don't override Equals(). Now it works with the code from my question!
public override bool Equals(object obj)
{
if (this == obj) {
return true;
}
if (obj == null || GetType() != obj.GetType()) {
return false;
}
if (Id != ((BaseObject)obj).Id)
{
return false;
}
return true;
}
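When Equals is overridden like this, GetHashCode should be overridden to match; otherwise hash-based collections (and NHibernate itself, which tracks entities in hash-keyed maps) can misbehave. A minimal companion, assuming the same Id property as above:

```csharp
public override int GetHashCode()
{
    // Must agree with Equals: two objects with the same Id
    // compare equal, so they must return the same hash code.
    return Id.GetHashCode();
}
```

Note that for entities whose Id is assigned on save, basing the hash code on Id has its own pitfalls (the hash changes when the Id is assigned), which is worth keeping in mind.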

Why do I need to perform a deep clone to get this code to work?

The following code works. However, it works because I end up creating a deep clone of the suppliers. If I do not perform a deep clone, I get an error suggesting that the supplier objects have changed and the attempt to amend the supplier table has failed. This only happens if the following line is run: foreach (Supplier suppliers in exceptions). Oddly, this occurs irrespective of whether the Delete() method is executed. Why does this happen? I have posted the working code below for your inspection. As I say, if you try looping without deep cloning, it does not work... Any ideas?
public void DeleteSuppliers(IList<Supplier> suppliers, Int32 parentID)
{
// If a supplier has been deleted on the form we need to delete from the database.
// Get the suppliers from the database
List<Supplier> dbSuppliers = Supplier.FindAllByParentID(parentID);
// So return any suppliers that are in the database that are not now on this form
IEnumerable<Supplier> results = dbSuppliers.Where(f => !suppliers.Any(d => d.Id == f.Id));
IList<Supplier> exceptions = null;
// code guard
if (results != null)
{
// cast as a list
IList<Supplier> tempList = (IList<Supplier>)results.ToList();
// deep clone otherwise there would be an error
exceptions = (IList<Supplier>)ObjectHelper.DeepClone(tempList);
// explicit clean up
tempList = null;
}
// Delete the exceptions from the database
if (exceptions != null)
{
// walk the suppliers that were deleted from the form
foreach (Supplier supplier in exceptions)
{
// delete the supplier from the database
supplier.Delete();
}
}
}
I think the error is about the collection being enumerated having changed. You're not allowed to change the collection being enumerated by a foreach statement (or by anything else that enumerates an IEnumerable, if I recall correctly).
But if you make a clone then the collection you're enumerating is separate from the collection being affected by the Delete.
Have you tried a shallow copy? I would think that would work just as well. A shallow copy could be created with ToArray.
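The general rule can be demonstrated with a plain List&lt;int&gt; (the names here are illustrative, not from the question): mutating a collection while foreach-ing over it throws, while enumerating a shallow snapshot taken with ToList or ToArray is fine.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var items = new List<int> { 1, 2, 3 };

        // Enumerating a shallow snapshot while mutating the original is safe:
        foreach (int i in items.ToList())
        {
            items.Remove(i);            // mutates the original list only
        }
        Console.WriteLine(items.Count); // prints 0

        // Mutating the collection being enumerated is not:
        items.AddRange(new[] { 1, 2, 3 });
        try
        {
            foreach (int i in items)
            {
                items.Remove(i);        // invalidates the enumerator
            }
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Collection was modified during enumeration.");
        }
    }
}
```

The snapshot is shallow: both lists reference the same Supplier-like objects, so a deep clone is not needed just to avoid the enumeration error.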
I resolved the issue by reordering the execution flow. Originally, this piece of code was executed last. The error went away when I executed it first.

Good practices for initialising properties?

I have a class property that is a list of strings, List<string>.
Sometimes this property is null; if it has been set but the list is empty, then Count is 0.
Elsewhere in my code I need to check whether this property is set, so currently my code checks whether it's null and whether the count is 0, which seems messy.
if(objectA.folders is null)
{
if(objectA.folders.count == 0)
{
// do something
}
}
Any recommendation on how this should be handled?
Maybe I should always initialise the property so that it's never null?
When I have a List as a property, I usually have something that looks like the following (this is not a thread-safe piece of code):
public class SomeObject
{
private List<string> _myList = null;
public List<string> MyList
{
get
{
if(_myList == null)
_myList = new List<string>();
return _myList;
}
}
}
Your code would then never have to check for null because the Property would be initialized if used. You would then only have to check for the Count.
Right now your code will always throw a NullReferenceException: you check for null, and if it IS null, you then try to access an object which does not exist.
If for your application the collection being a null reference never has a different meaning than the collection being empty, then yes, I would say you should always initialize it and this way remove the null checks from the remaining code.
This approach only makes sense if the property setter does not allow to change it to a null reference after initialization.
You have three options (and you need to decide based on your project):
Create a method to check for NullOrNoElements. Pro: allows both null and no entries. Con: you have to call it everywhere you want to use the property.
Preinitialize with a list. Pro: thread-safe and very easy. Con: uses memory even when not used (depending on how many instances you have, this may be a problem).
Lazy initialization. Pro: only uses memory when really used. Con: not thread-safe.
private List<string> lp = null;
public List<string> ListProp
{
get
{
if(lp == null)
lp = new List<string>();
return lp;
}
}
You could always initialize the property so it's an empty List. Then you can just check the Count property.
List<String> Folder = new List<String>();
I once wrote an extension method for ICollection objects that checked if they were null or empty
public static Boolean IsNullOrEmpty<T>(this ICollection<T> collection)
{
return collection == null || collection.Count == 0;
}
public static Boolean IsPopulated<T>(this ICollection<T> collection)
{
return collection != null && collection.Count > 0;
}
You could do this in a single IF
if (objectA.folders == null || objectA.folders.Count == 0)
Or you could create a boolean property in the class which checks this status for you and returns the result:
public bool FoldersIsNullOrEmpty
{
get { return folders == null || folders.Count == 0; }
}
If it does not make a difference to your application, I would rather recommend initializing the List to start with.
You could handle this by initializing the object in the constructor. This is usually where this type of thing is done. Although I see nothing wrong with your current code. No point in initializing stuff that doesn't exist yet, it just wastes memory.
It's a good question. I would add a method to objectA, FoldersNullOrEmpty(), that you can use, e.g.:
public virtual bool FoldersNullOrEmpty()
{
return (folders == null || folders.Count == 0);
}
I almost always initialize lists and even make sure they can't be set to null if exposed by any setters. This makes using them much easier.
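Putting that advice together, a property shape like the following keeps the list non-null for the object's whole lifetime (the class and property names are placeholders, not from the question):

```csharp
using System.Collections.Generic;

public class ObjectA
{
    // Initialized once and never reassigned, so callers only
    // ever need to check Count, never null.
    private readonly List<string> folders = new List<string>();

    public List<string> Folders
    {
        get { return folders; }
    }
}
```

Call sites then shrink to a single check, e.g. `if (objectA.Folders.Count == 0) { /* do something */ }`.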

Refactor help (C#)

I have several hundred lines of code like this:
if (c.SomeValue == null || c.SomeProperty.Status != 'Y')
{
btnRecordCall.Enabled = false;
}
if (c.SomeValue == null || (c.SomeProperty.Status != 'Y' &&
c.SomeOtherPropertyAction != 'Y'))
{
btnAddAction.Enabled = false;
}
if (c.SomeValue == null || c.SomeProperty.Processing != 'Y')
{
btnProcesss.Enabled = false;
}
How can I refactor this correctly? I see that the check c.SomeValue == null appears every time, but always combined with other criteria. How can I eliminate this duplication?
I would use the specification pattern, and build composite specifications that map to a proper Enabled value.
The overall question you want to answer is whether some object c satisfies a given condition, which then allows you to decide if you want something enabled. So then you have this interface:
interface ICriteria<T>
{
bool IsSatisfiedBy(T c);
}
Then your code will look like this:
ICriteria<SomeClass> cr = GetCriteria();
btnAddAction.Enabled = cr.IsSatisfiedBy(c);
The next step is to compose a suitable ICriteria object. You can have another ICriteria implementation (in addition to Or and And), called PredicateCriteria, which looks like this:
class PredicateCriteria<T> : ICriteria<T>
{
public PredicateCriteria(Func<T, bool> p) {
this.predicate = p;
}
readonly Func<T, bool> predicate;
public bool IsSatisfiedBy(T item) {
return this.predicate(item);
}
}
One instance of this would be:
var notNull = new PredicateCriteria<SomeClass>(x => x.SomeValue != null);
The rest would be composition of this with other criteria.
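For illustration, the And combinator mentioned above could be sketched as follows (the Or case is symmetric; AndCriteria is a name invented here, building on the ICriteria interface from this answer):

```csharp
class AndCriteria<T> : ICriteria<T>
{
    private readonly ICriteria<T> left;
    private readonly ICriteria<T> right;

    public AndCriteria(ICriteria<T> left, ICriteria<T> right)
    {
        this.left = left;
        this.right = right;
    }

    public bool IsSatisfiedBy(T item)
    {
        // Satisfied only when both sub-criteria are satisfied.
        return left.IsSatisfiedBy(item) && right.IsSatisfiedBy(item);
    }
}
```

A composed criterion for one button might then look like `new AndCriteria<SomeClass>(new PredicateCriteria<SomeClass>(x => x.SomeValue != null), statusCriteria)`, where statusCriteria is whatever predicate the button needs.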
If you don't want to do much refactoring, you can easily pull the null check out.
if (c.SomeValue == null)
{
btnRecordCall.Enabled = false;
btnAddAction.Enabled = false;
btnProcesss.Enabled = false;
}
else
{
if(c.SomeProperty.Status != 'Y')
{
btnRecordCall.Enabled = false;
}
if((c.SomeProperty.Status != 'Y') &&
(c.SomeOtherPropertyAction != 'Y'))
{
btnAddAction.Enabled = false;
}
if(c.SomeProperty.Processing != 'Y')
{
btnProcesss.Enabled = false;
}
}
If you're looking to refactor instead of shuffle, the wall of boolean testing could be moved in to methods/extension methods of whatever class your object c is an instance of - that way you could say
btnRecordCall.Enabled = c.IsRecordCallAllowed();
Create properties on "c" such as "CanRecordCall", "CanAddAction", "CanProcess" so that your code becomes this:
btnRecordCall.Enabled = c.CanRecordCall;
btnAddAction.Enabled = c.CanAddAction;
btnProcess.Enabled = c.CanProcess;
The "c.SomeValue == null" is a typical response to NullReferenceExceptions. You could improve "c" by initializing its SomeValue property to a null object so that there is never a null reference (just an object that does nothing).
In specific, since you seem to be setting UI elements state, you could consider more of a two-way data binding model where you set up a data context and a control-to-property mapping and let that govern the control state. You can also consider a more heavy-weight solution that would be something like the Validation Application Block from Enterprise Library. There are also some fluent validation projects that you should take a look at.
I'd start by making sure all such code is contiguous. Anything other than this code should be moved before or after the code.
Then, for each reference to a control property, create a corresponding local variable, e.g., processEnabled. Define it before the first if statement. For each such property, move, e.g., btnProcesss.Enabled = false; to the end of this code block, and change "false" to processEnabled. Replace the original with processEnabled = false;.
When the code block has no more references to controls (or to anything else having to do with the UI), select the entire block, from the added variables to the control property sets at the end, and use the Extract Method refactoring. That should leave you with a method that accepts c, and produces values you can later use to set control properties.
You can even get fancier. Instead of individual local variables, define a class that has those "variables" as properties. Do pretty much the same thing, and the extracted method will wind up returning an instance of that class, instead of individual out parameters.
From there, you may start to see more things to clean up in the extracted method, now that you've removed everything to do with the UI from that code.
I'm guessing the issue here is about 'boolean map' style refactorings, i.e., being able to refactor complementary boolean cases where there might be some gaps and some repetition. Well, if that's what you're after, you can certainly write a tool to do this (it's what I would do). Basically, you need to parse a bunch of if statements and take note of condition combinations that are involved. Then, through some fairly simple logic, you can get your model to spit out a different, more optimized model.
The code you show above is one reason why I love F#. :)
Interestingly, in our current Winforms app, the three conditions would be in three different classes, since each button would be attached to a different Command.
The conditions would be in the CanExecute methods of the commands and control the enable/disable behaviour of the button that triggers the command. The corresponding execution code is in the Execute method of the class.
