ThreadStatic and ASP.NET - C#

I've got a requirement to protect my business object properties via a list of separate authorization rules. I want my authorization rules to be suspended during various operations such as converting to DTOs and executing validation rules (validating property values the current user does not have authorization to see).
The approach I'm looking at wraps the calls in a scope object that uses a [ThreadStatic] property to determine whether the authorization rules should be run:
public class SuspendedAuthorizationScope : IDisposable
{
    [ThreadStatic]
    public static bool AuthorizationRulesAreSuspended;

    public SuspendedAuthorizationScope()
    {
        AuthorizationRulesAreSuspended = true;
    }

    public void Dispose()
    {
        AuthorizationRulesAreSuspended = false;
    }
}
Here is the IsAuthorized check (from base class):
public bool IsAuthorized(string memberName, AuthorizedAction authorizationAction)
{
    if (SuspendedAuthorizationScope.AuthorizationRulesAreSuspended)
        return true;

    var context = new RulesContext();
    _rules.OfType<IAuthorizationRule>()
          .Where(r => r.PropertyName == memberName)
          .Where(r => r.AuthorizedAction == authorizationAction)
          .ToList().ForEach(r => r.Execute(context));
    return context.HasNoErrors();
}
Here is the ValidateProperty method demonstrating usage (from the base class):
private void ValidateProperty(string propertyName, IEnumerable<IValidationRule> rules)
{
    using (new SuspendedAuthorizationScope())
    {
        var context = new RulesContext();
        rules.ToList().ForEach(rule => rule.Execute(context));
        if (HasNoErrors(context))
            RemoveErrorsForProperty(propertyName);
        else
            AddErrorsForProperty(propertyName, context.Results);
    }
    NotifyErrorsChanged(propertyName);
}
I've got some tests around the scoping object that show that the expected/correct value of SuspendedAuthorizationScope.AuthorizationRulesAreSuspended is used as long as a lambda resolves in the scope of the using statement.
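For reference, a minimal sketch of one of those tests (NUnit-style; the test name and assertion style are mine, not the actual suite):
[Test]
public void RulesAreSuspendedOnlyInsideTheUsingBlock()
{
    Assert.IsFalse(SuspendedAuthorizationScope.AuthorizationRulesAreSuspended);
    using (new SuspendedAuthorizationScope())
    {
        // a lambda resolved inside the scope sees the suspended state
        Func<bool> isSuspended = () => SuspendedAuthorizationScope.AuthorizationRulesAreSuspended;
        Assert.IsTrue(isSuspended());
    }
    Assert.IsFalse(SuspendedAuthorizationScope.AuthorizationRulesAreSuspended);
}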
Are there any obvious flaws to this design? Is there anything in ASP.NET that I should be concerned with as far as threading goes?

There are two concerns that I see with your proposed approach:
Failure to wrap the creation of SuspendedAuthorizationScope in a using statement means access stays open beyond the intended scope. In other words, an easy-to-make mistake becomes a security hole (especially thinking in terms of future-proofing your code/design, when a new hire starts digging through unfamiliar code and misses this subtle requirement).
Making this magic flag [ThreadStatic] magnifies the previous bullet: access can be left open for another page, because the thread will be reused to process another request after it is done with the current page, and the authorization flag will not have been reset. So an authorization scope that lingers too long goes not just beyond a missed call to .Dispose(); it can actually leak into another request/page belonging to a completely different user.
That said, the approaches I've seen to solving this problem did involve essentially checking the authorization and marking a magic flag that allowed bypass later on, and then resetting it.
Suggestions:
1. To at least solve the worst variant (#2 above), can you move the magic cookie to an instance field on your base page class, so it is only valid within the scope of that page instance and no other?
2. To solve all cases, is it possible to pass a delegate to the authorization function, which would, upon successful authorization, invoke your delegate to run the privileged logic and then guarantee cleanup? See the pseudo code example below:
void myBizLogicFunction()
{
    DoActionThatRequiresAuthorization1();
    DoActionThatRequiresAuthorization2();
    DoActionThatRequiresAuthorization3();
}

void AuthorizeAndRun(string memberName, AuthorizedAction authorizationAction, Action privilegedFunction)
{
    if (IsAuthorized(memberName, authorizationAction))
    {
        try
        {
            AuthorizationRulesAreSuspended = true;
            privilegedFunction();
        }
        finally
        {
            // reset the flag even if privilegedFunction throws
            AuthorizationRulesAreSuspended = false;
        }
    }
}
With the above, I think the flag can stay thread static, since finally is guaranteed to run and thus authorization cannot leak beyond the call to privilegedFunction. I think this would work, though it could use validation by others...
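Call sites would then look something like this (hypothetical usage; the member name and action value are made up):
AuthorizeAndRun("Salary", AuthorizedAction.ReadProperty, () =>
{
    // rules are suspended only for the duration of this delegate,
    // and the finally block above restores them no matter what
    myBizLogicFunction();
});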

If you have complete control over your code and don't care about the hidden dependency on a magic static value, your approach will work. Note that you are putting a big burden on yourself/whoever supports your code to make sure there is never asynchronous processing inside the using block, and that each usage of the magic value is wrapped in a proper using block.
In general it is a bad idea because:
Threads and requests are not tied one-to-one, so you can run into cases where your thread-local object changes the state of some other request. This is even more likely to happen if you use ASP.NET MVC 4+ with async handlers.
Static values of any kind are a code smell, and you should try to avoid them.
Request-related information should be stored in HttpContext.Items, or maybe Session (though session state lasts much longer and requires more careful cleanup); see the sketch below.
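For example, the original scope object could be reworked to keep its flag in HttpContext.Items, so the state cannot outlive the request (a rough sketch; the key name is arbitrary and null-checking of HttpContext.Current is elided for brevity):
public class SuspendedAuthorizationScope : IDisposable
{
    private const string Key = "AuthorizationRulesAreSuspended";

    public static bool AuthorizationRulesAreSuspended
    {
        // an absent key means "not suspended"
        get { return Equals(HttpContext.Current.Items[Key], true); }
    }

    public SuspendedAuthorizationScope()
    {
        HttpContext.Current.Items[Key] = true;
    }

    public void Dispose()
    {
        HttpContext.Current.Items[Key] = false;
    }
}
A forgotten using block is still a bug, but at worst the flag now dies with the request instead of leaking into another user's request.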

My concern would be about the potential delay between the time you leave your using block and the time Dispose actually runs if someone skips the using statement and the object is left to the garbage collector. You may be in a false "authorized" state longer than you intend to be.

Related

Memorycache won't store my object

So I've written a couple of wrapper methods around the System.Runtime.Caching MemoryCache, to get a general/user-bound cache context per viewmodel in my ASP.NET MVC application.
At some point I noticed that my delegate keeps getting called every time, rather than retrieving my stored object, for no apparent reason.
Oddly enough, none of my unit tests (which use simple data to check it) failed or showed a pattern explaining this.
Here's one of the wrapper methods:
public T GetCustom<T>(CacheItemPolicy cacheSettings, Func<T> createCallback, params object[] parameters)
{
    if (parameters.Length == 0)
        throw new ArgumentException("GetCustom can't be called without any parameters.");

    lock (_threadLock)
    {
        var mergedToken = GetCacheSignature(parameters);
        var cache = GetMemoryCache();
        if (cache.Contains(mergedToken))
        {
            var cacheResult = cache.Get(mergedToken);
            if (cacheResult is T)
                return (T)cacheResult;
            throw new ArgumentException(string.Format("A caching signature was passed, which duplicates another signature of different return type. ({0})", mergedToken));
        }
        var result = createCallback(); // <-- keeps landing here
        if (!EqualityComparer<T>.Default.Equals(result, default(T)))
        {
            cache.Add(mergedToken, result, cacheSettings);
        }
        return result;
    }
}
I was wondering if anyone here knows about conditions which render an object invalid for storage within the MemoryCache.
Until then I'll just strip my complex classes' properties until storage works.
Experiences would be interesting nevertheless.
There are a couple of frequent reasons why this may be happening (assuming the logic to actually add objects to the cache/find the correct cache instance is correct):
An x86 (32-bit) process has a "very small" amount of memory to work with; it is relatively easy to consume too much memory outside the cache (or outside a particular instance of the cache), with the result that items are immediately evicted from the cache.
ASP.NET app domain recycles, which happen for a variety of reasons, will clear out the cache too.
Notes:
Generally you'd store "per user cached information" in session state so it is managed appropriately and can be persisted via SQL/other out-of-process state options.
Relying on caching per-user objects may not improve performance if you need to support a larger number of users. You need to carefully measure the impact at the load level you expect to handle.
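If eviction is the suspect, one way to confirm it is to attach a RemovedCallback to the CacheItemPolicy you pass in and log why entries disappear (a diagnostic sketch; the expiration value is illustrative):
var cacheSettings = new CacheItemPolicy
{
    SlidingExpiration = TimeSpan.FromMinutes(10),
    // fires whenever the entry leaves the cache; a reason of
    // CacheEntryRemovedReason.Evicted points at memory pressure
    RemovedCallback = args => Debug.WriteLine(
        string.Format("'{0}' removed: {1}", args.CacheItem.Key, args.RemovedReason))
};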

What are the dangers of using Session.SyncRoot for locking per session?

I have a race condition with the following code if two requests come in really close together in an ASP.NET MVC app:
var workload = org.Workloads.SingleOrDefault(p => ...conditions...);
if (workload == null)
{
    workload = org.CreateWorkload(id);
}
workload and org are Entity Framework objects. The call to CreateWorkload adds a row to a Workloads table in the database. (We really should enforce this with a UNIQUE constraint on the table, but I can't now that the table has some dirty data in it.) Subsequent calls to the action method that contains this code throw an exception when SingleOrDefault encounters more than one row satisfying the conditions.
So to fix this, I want to lock these lines of code. I don't want to do it per request with a static lock object, because that slows the site down for every user. What I'd like to do is use Session.SyncRoot for locking, i.e.:
Workload workload;
lock (Session.SyncRoot)
{
    workload = org.Workloads.SingleOrDefault(p => ...conditions...);
    if (workload == null)
    {
        workload = org.CreateWorkload(id);
    }
}
I'm not an ASP.NET expert, though, and there are some warning signs in the docs and from ReSharper: namely, that SyncRoot can throw NotImplementedException or be null. Testing, however, shows that this works just fine.
So, ASP.NET experts, what are the risks of using Session.SyncRoot for this? As an alternative, if Session.SyncRoot is "really risky", could I assign a lock object in the Session collection on Session start up to do the same thing?
The danger only exists if you use a custom session class that implements HttpSessionStateBase but doesn't override the SyncRoot property to do something other than throw a NotImplementedException. The HttpSessionStateWrapper and HttpSessionState classes DO implement and override the SyncRoot property. So, as long as you're accessing the Session via HttpSessionStateWrapper or HttpSessionState and not a custom class, this will work just fine.
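If you'd rather not rely on SyncRoot at all, your fallback idea also works with in-process session state: stash a plain lock object in the session when it starts and lock on that (a sketch; the key name is arbitrary, and note this is meaningless with StateServer/SQL session state, where the object is re-deserialized on every request):
// Global.asax.cs
protected void Session_Start(object sender, EventArgs e)
{
    Session["WorkloadLock"] = new object();
}

// In the action method:
Workload workload;
lock (Session["WorkloadLock"])
{
    workload = org.Workloads.SingleOrDefault(p => ...conditions...);
    if (workload == null)
    {
        workload = org.CreateWorkload(id);
    }
}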

Whose responsibility is it to cache / memoize function results?

I'm working on software which allows the user to extend a system by implementing a set of interfaces.
In order to test the viability of what we're doing, my company "eats its own dog food" by implementing all of our business logic in these classes in the exact same way a user would.
We have some utility classes / methods that tie everything together and use the logic defined in the extendable classes.
I want to cache the results of the user-defined functions. Where should I do this?
Is it the classes themselves? This seems like it can lead to a lot of code duplication.
Is it the utilities/engine which uses these classes? If so, an uninformed user may call the class function directly and not receive any caching benefit.
Example code
public interface ILetter { string[] GetAnimalsThatStartWithMe(); }

public class A : ILetter
{
    public string[] GetAnimalsThatStartWithMe()
    {
        return new[] { "Aardvark", "Ant" };
    }
}

public class B : ILetter
{
    public string[] GetAnimalsThatStartWithMe()
    {
        return new[] { "Baboon", "Banshee" };
    }
}

/* ...Left to user to define... */
public class Z : ILetter
{
    public string[] GetAnimalsThatStartWithMe()
    {
        return new[] { "Zebra" };
    }
}
public static class LetterUtility
{
    public static string[] GetAnimalsThatStartWithLetter(char letter)
    {
        if (letter == 'A') return (new A()).GetAnimalsThatStartWithMe();
        if (letter == 'B') return (new B()).GetAnimalsThatStartWithMe();
        /* ... */
        if (letter == 'Z') return (new Z()).GetAnimalsThatStartWithMe();
        throw new ApplicationException("Letter " + letter + " not found");
    }
}
Should LetterUtility be responsible for caching? Should each individual instance of ILetter? Is there something else entirely that can be done?
I'm trying to keep this example short, so these example functions don't need caching. But consider I add this class that makes (new C()).GetAnimalsThatStartWithMe() take 10 seconds every time it's run:
public class C : ILetter
{
    public string[] GetAnimalsThatStartWithMe()
    {
        Thread.Sleep(10000); // simulate an expensive lookup
        return new[] { "Cat", "Capybara", "Clam" };
    }
}
I find myself torn between making our software as fast as possible while maintaining less code (in this example: caching the result in LetterUtility) and doing the exact same work over and over (in this example: waiting 10 seconds every time C is used).
Which layer is best responsible for caching of the results of these user-definable functions?
The answer is pretty obvious: the layer that can correctly implement the desired cache policy is the right layer.
A correct cache policy needs to have two characteristics:
It must never serve up stale data; it must know whether the method being cached is going to produce a different result, and invalidate the cache at some point before the caller would get stale data
It must manage cached resources efficiently on the user's behalf. A cache without an expiration policy that grows without bounds has another name: we usually call them "memory leaks".
What's the layer in your system that knows the answers to the questions "is the cache stale?" and "is the cache too big?" That's the layer that should implement the cache.
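As a concrete illustration of the second characteristic: whichever layer ends up owning the cache should at minimum attach an expiration policy so the cache stays bounded (a sketch using System.Runtime.Caching; the key name and variable are illustrative):
var cache = MemoryCache.Default;
cache.Add("animals-for-C", expensiveResult, new CacheItemPolicy
{
    // entries unused for five minutes fall out of the cache,
    // bounding both staleness and memory growth
    SlidingExpiration = TimeSpan.FromMinutes(5)
});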
Something like caching can be considered a "cross-cutting" concern (http://en.wikipedia.org/wiki/Cross-cutting_concern):
In computer science, cross-cutting concerns are aspects of a program which affect other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and can result in either scattering (code duplication), tangling (significant dependencies between systems), or both.
For instance, if writing an application for handling medical records, the bookkeeping and indexing of such records is a core concern, while logging a history of changes to the record database or user database, or an authentication system, would be cross-cutting concerns since they touch more parts of the program.
Cross cutting concerns can often be implemented via Aspect Oriented Programming (http://en.wikipedia.org/wiki/Aspect-oriented_programming).
In computing, aspect-oriented programming (AOP) is a programming paradigm which aims to increase modularity by allowing the separation of cross-cutting concerns. AOP forms a basis for aspect-oriented software development.
There are many tools in .NET to facilitate Aspect Oriented Programming. I'm most fond of those that provide completely transparent implementation. In the example of caching:
public class Foo
{
    [Cache(10)] // cache the returned value for 10 minutes
    public virtual string Bar() { ... }
}
That's all you need to do... everything else happens automatically by defining a behavior like so:
public class CachingBehavior
{
    // Intercepts invocations of any method attributed with [Cache].
    // For caching, this method would check whether some cache store already
    // contains the data and return it if so; otherwise it performs the normal
    // method operation and stores the result.
    public void Intercept(IInvocation invocation) { ... }
}
There are two general schools for how this happens:
Post-build IL weaving. Tools like PostSharp, Microsoft CCI, and Mono Cecil can be configured to automatically rewrite these attributed methods to delegate to your behaviors.
Runtime proxies. Tools like Castle DynamicProxy and Microsoft Unity can automatically generate proxy types (a type derived from Foo that overrides Bar in the example above) that delegate to your behavior.
Although I do not know C#, this seems like a case for using AOP (Aspect-Oriented Programming). The idea is that you can 'inject' code to be executed at certain points in the execution stack.
You can add the caching code as follows:
IF( InCache( object, method, method_arguments ) )
RETURN Cache(object, method, method_arguments);
ELSE
ExecuteMethod(); StoreResultsInCache();
You then define that this code should be executed before every call of your interface functions (and all subclasses implementing these functions as well).
Can some .NET expert enlighten us how you would do this in .NET ?
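One way this looks in .NET, using Castle DynamicProxy (a sketch only: the cache key scheme is naive, there is no expiry, and a real version would need a thread-safe store):
public class CachingInterceptor : IInterceptor
{
    private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();

    public void Intercept(IInvocation invocation)
    {
        // InCache(object, method, method_arguments)?
        var key = invocation.Method.Name + "|" + string.Join("|", invocation.Arguments);
        object cached;
        if (_cache.TryGetValue(key, out cached))
        {
            invocation.ReturnValue = cached;   // RETURN Cache(...)
            return;
        }
        invocation.Proceed();                  // ExecuteMethod()
        _cache[key] = invocation.ReturnValue;  // StoreResultsInCache()
    }
}

// Wiring up a proxy around a concrete ILetter:
var generator = new ProxyGenerator();
ILetter letter = generator.CreateInterfaceProxyWithTarget<ILetter>(
    new C(), new CachingInterceptor());
var animals = letter.GetAnimalsThatStartWithMe(); // slow once, cached thereafter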
In general, caching and memoisation make sense when:
Obtaining the result is (or at least can be) high-latency or otherwise more expensive than the expense caused by the caching itself.
The results have a look-up pattern where there will be frequent calls with the same inputs to the function (that is, not just the same arguments but the same instance, static and other data that affects the result).
There isn't an already-existing caching mechanism within the code that the code in question calls into, which would make this unnecessary.
There won't be another caching mechanism within the code that calls the code in question, making this unnecessary (which is why it almost never makes sense to memoise GetHashCode() within that method, despite people often being tempted to when the implementation is relatively expensive).
The cached result cannot become stale, is unlikely to become stale while it sits in the cache, doesn't matter if it becomes stale, or is easy to detect as stale.
There are cases where every use-case for a component will match all of these. There are many more where they will not. For example, if a component caches results but is never called twice with the same inputs by a particular client component, then that caching is just a waste that has had a negative impact upon performance (maybe negligible, maybe severe).
More often it makes much more sense for the client code to decide upon the caching policy that would suit it. It will also often be easier to tweak for a particular use at this point in the face of real-world data than in the component (since the real-world data it'll face could vary considerably from use to use).
It's even harder to know what degree of staleness could be acceptable. Generally, a component has to assume that 100% freshness is required from it, while the client component can know that a certain amount of staleness will be fine.
On the other hand, it can be easier for a component to obtain information that is of use to the cache. Components can work hand-in-hand in these cases, though it is much more involved (an example would be the If-Modified-Since mechanism used by RESTful webservices, where a server can indicate that a client can safely use information it has cached).
Also, a component can have a configurable caching policy. Connection pooling is a caching policy of sorts; consider how that's configurable.
So in summary, the layer that should cache is the component that can work out what caching is both possible and useful:
That is most often the client code, though having details of likely latency and staleness documented by the component's authors will help here.
It can less often be the client code with help from the component, though you have to expose details of the caching to allow that.
It can sometimes be the component, with the caching policy made configurable by the calling code.
It can only rarely be the component alone, because it's rare for all possible use-cases to be served well by the same caching policy. One important exception is where the same instance of the component serves multiple clients, because then the factors above are spread over those multiple clients.
All of the previous posts brought up some good points; here is a very rough outline of a way you might do it. I wrote this up on the fly, so it might need some tweaking:
interface IMemoizer<T, R>
{
    bool IsValid(T args);    // is the cached entry still valid, or stale, etc.
    bool TryLookup(T args, out R result);
    void StoreResult(T args, R result);
}

static class IMemoizerExtensions
{
    public static Func<T, R> Memoizing<T, R>(this IMemoizer<T, R> src, Func<T, R> method)
    {
        return new Func<T, R>(args =>
        {
            R result;
            if (src.TryLookup(args, out result) && src.IsValid(args))
            {
                return result;
            }
            else
            {
                result = method.Invoke(args);
                src.StoreResult(args, result);
                return result;
            }
        });
    }
}
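A trivial dictionary-backed implementation plus usage, just to show the pieces fitting together (no staleness or eviction handling; the names are made up):
public class SimpleMemoizer<T, R> : IMemoizer<T, R>
{
    private readonly Dictionary<T, R> _store = new Dictionary<T, R>();

    public bool IsValid(T args) { return true; } // never treats entries as stale
    public bool TryLookup(T args, out R result) { return _store.TryGetValue(args, out result); }
    public void StoreResult(T args, R result) { _store[args] = result; }
}

// Usage:
var memoizer = new SimpleMemoizer<char, string[]>();
Func<char, string[]> lookup =
    memoizer.Memoizing((char c) => LetterUtility.GetAnimalsThatStartWithLetter(c));
var animals = lookup('C');  // takes 10 seconds the first time
animals = lookup('C');      // served from the dictionary thereafter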

Is it a code smell for one method to depend on another?

I am refactoring a class so that the code is testable (using NUnit and RhinoMocks as testing and isolation frameworks) and have found myself with a method that is dependent on another (i.e. it depends on something which is created by that other method). Something like the following:
public class Impersonator
{
    private ImpersonationContext _context;

    public void Impersonate()
    {
        ...
        _context = GetContext();
        ...
    }

    public void UndoImpersonation()
    {
        if (_context != null)
            _someDepend.Undo();
    }
}
Which means that to test UndoImpersonation, I need to set it up by calling Impersonate (Impersonate already has several unit tests to verify its behaviour). This smells bad to me but in some sense it makes sense from the point of view of the code that calls into this class:
public void ExerciseClassToTest(Impersonator c)
{
    try
    {
        if (NeedImpersonation())
        {
            c.Impersonate();
        }
        ...
    }
    finally
    {
        c.UndoImpersonation();
    }
}
I wouldn't have worked this out if I didn't try to write a unit test for UndoImpersonation and found myself having to set up the test by calling the other public method. So, is this a bad smell and if so how can I work around it?
Code smell has got to be one of the vaguest terms I have ever encountered in the programming world. For a group of people that pride themselves on engineering principles, it ranks right up there in terms of unmeasurable rubbish, and is about as useless a measure as LOC per day for programmer efficiency.
Anyway, that's my rant, thanks for listening :-)
To answer your specific question, I don't believe this is a problem. If you test something that has pre-conditions, you need to ensure the pre-conditions have been set up first for the given test case.
One of the tests should be what happens when you call it without first setting up the pre-conditions - it should either fail gracefully or set up its own pre-condition if the caller hasn't bothered to do so.
Well, there is a bit too little context to tell, but it looks like _someDepend should be initialized in the constructor.
Initializing fields in an instance method is a big NO for me. A class should be fully usable (i.e. all methods work) as soon as it is constructed; so the constructor(s) should initialize all instance variables. See e.g. the page on single step construction in Ward Cunningham's wiki.
The reason initializing fields in an instance method is bad is mainly that it imposes an implicit ordering on how you can call methods. In your case, UndoImpersonation will do different things depending on whether Impersonate was called first. This is generally not something a user of your class would expect, so it's bad :-(.
That said, sometimes this kind of coupling may be unavoidable (e.g. if one method acquires a resource such as a file handle, and another method is needed to release it). But even that should be handled within one method if possible.
What applies to your case is hard to tell without more context.
Provided you don't consider mutable objects a code smell by themselves, having to put an object into the state needed for a test is simply part of the set-up for that test.
This is often unavoidable, for instance when working with remote connections - you have to call Open() before you can call Close(), and you don't want Open() to automatically happen in the constructor.
However you want to be very careful when doing this that the pattern is something readily understood - for instance I think most users accept this kind of behaviour for anything transactional, but might be surprised when they encounter DoStuff() and TheMethodIWantToTest() (whatever they're really called).
It's normally best practice to have a property that represents the current state - again look at remote or DB connections for an example of a consistently understood design.
The big no-no is for this to ever happen for properties. Properties should never care what order they are called in. If you have a simple value that does depend on the order of methods then it should be a parameterless method instead of a property-get.
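For instance, a state property on the question's class might look like this (a sketch, mirroring how DbConnection exposes its State):
public class Impersonator
{
    private ImpersonationContext _context;

    // callers can query the current state instead of having to
    // remember which methods they have already called
    public bool IsImpersonating
    {
        get { return _context != null; }
    }
}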
Yes, I think there is a code smell in this case. Not because of dependencies between methods, but because of the vague identity of the object. Rather than having an Impersonator which can be in different persona states, why not have an immutable Persona?
If you need a different Persona, just create a new one rather than changing the state of an existing object. If you need to do some cleanup afterwards, make Persona disposable. You can keep the Impersonator class as a factory:
using (var persona = impersonator.CreatePersona(...))
{
    // do something with the persona
}
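A sketch of how that factory shape might look (names assumed; this supposes the context type exposes an Undo(), as in the question):
public class Persona : IDisposable
{
    private readonly ImpersonationContext _context;

    internal Persona(ImpersonationContext context)
    {
        _context = context; // all state is fixed at construction
    }

    public void Dispose()
    {
        _context.Undo(); // cleanup is guaranteed by the using block
    }
}

public class Impersonator
{
    public Persona CreatePersona(/* credentials */)
    {
        return new Persona(GetContext());
    }
}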
To answer the title: having methods call each other (chaining) is unavoidable in object-oriented programming, so in my view there is nothing wrong with testing a method that calls another. A unit test can cover a whole class, after all; it's a "unit" you're testing.
The level of chaining depends on the design of your object - you can either fork or cascade.
Forking:
classToTest1.SomeDependency.DoSomething()
Cascading:
classToTest1.DoSomething() (which internally would call SomeDependency.DoSomething)
But as others have mentioned, definitely keep your state initialisation in the constructor, which, from what I can tell, will probably solve your issue.

Is this code thread-safe? How can I make it thread-safe?

I have a WCF service with a security class for getting some of the attributes of the calling user. However, I'm quite bad when it comes to thread safety; to this point I haven't needed to do much with it, and I only have a rudimentary theoretical understanding of the problems of multi-threading.
Given the following function:
public class SecurityService
{
    public static Guid GetCurrentUserID()
    {
        if (Thread.CurrentPrincipal is MyCustomPrincipal)
        {
            MyCustomIdentity identity = null;
            MyCustomPrincipal principal = (MyCustomPrincipal)Thread.CurrentPrincipal;
            if (principal != null)
            {
                identity = (MyCustomIdentity)principal.Identity;
            }
            if (identity != null)
            {
                return identity.UUID;
            }
        }
        return Guid.Empty;
    }
}
Is there any chance that something could go wrong in there if the method is called at the same time from 2 different threads? In my nightmares I see terrible consequences if these methods go wrong, like someone accidentally getting someone else's data or suddenly becoming a system administrator. A colleague (who is also not an expert, but is better than me) thought it would probably be okay, because there aren't really any shared resources being accessed there.
Or this one, which will access the database - could this go awry?
public static User GetCurrentUser()
{
    var uuid = GetCurrentUserID();
    if (uuid != Guid.Empty) // Guid is a value type, so a null check would always pass
    {
        var rUser = new UserRepository();
        return rUser.GetByID(uuid);
    }
    return null;
}
There's a lot of discussion about the principals of threading, but I tend to fall down and get confused when it comes to actually applying it, and knowing when to apply it. Any help appreciated.
I can explain more about the context/purpose of these functions if it's not clear.
EDIT: The rUser.GetByID() function basically calls through to a repository that looks up the database using NHibernate. So I guess the database here is a "shared resource", but not really one that gets locked or modified for this operation... in which case I guess it's okay...?
From what I see, the first example only accesses thread-local storage and stack-based variables, while the second one only accesses stack-based variables.
Both should be thread-safe.
I can't tell if GetByID is thread safe or not. Look to see if it accesses any shared/static resources. If it does, it's not thread-safe without some additional code to protect those resources.
The code that you have above doesn't contain anything that changes global state, therefore you can be fairly sure that it won't be a problem being called by multiple simultaneous threads. Security principal information is tied to each thread, so no problem there either.
