I'm wondering if there are any differences between MemoryCache and HttpRuntime.Cache, which one is preferred in ASP.NET MVC projects?
As far as I understand, both are thread-safe and at first sight the API is more or less the same, so is there any difference in when to use which?
HttpRuntime.Cache gets the Cache for the current application.
The MemoryCache class is similar to the ASP.NET Cache class.
The MemoryCache class has many properties and methods for accessing the cache that will be familiar to you if you have used the ASP.NET Cache class.
The main difference between HttpRuntime.Cache and MemoryCache is that the latter has been changed to make it usable by .NET Framework applications that are not ASP.NET applications.
For additional reading:
Justin Mathew Blog - Caching in .Net 4.0
Jon Davis Blog - Four Methods Of Simple Caching In .NET
Update :
According to user feedback, Jon Davis' blog is sometimes unavailable. Hence I have put the whole article in as an image. Please see that.
Note: if it's not clear, just click on the image. After that it'll open in a browser. Then click it again to zoom :)
Here is Jon Davis' article. To preserve readability, I'm cutting out the now obsolete EntLib section, the intro as well as the conclusion.
ASP.NET Cache
ASP.NET, or the System.Web.dll assembly, does have a caching mechanism. It was never intended to be used outside of a web context, but it can be used outside of the web, and it does perform all of the above expiration behaviors in a hashtable of sorts.
After scouring Google, it appears that quite a few people who have discussed the built-in caching functionality in .NET have resorted to using the ASP.NET cache in their non-web projects. This is no longer the most-available, most-supported built-in caching system in .NET; .NET 4 has an ObjectCache which I’ll get into later. Microsoft has always been adamant that the ASP.NET cache is not intended for use outside of the web. But many people are still stuck in .NET 2.0 and .NET 3.5, and need something to work with, and this happens to work for many people, even though MSDN says clearly:
Note: The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.
The class for the ASP.NET cache is System.Web.Caching.Cache in System.Web.dll. However, you cannot simply new-up a Cache object. You must acquire it from System.Web.HttpRuntime.Cache.
Cache cache = System.Web.HttpRuntime.Cache;
Working with the ASP.NET cache is documented on MSDN here.
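For reference, a minimal sketch of typical usage (the wrapper class and method names here are mine, invented for illustration; the Insert overload itself is from System.Web.Caching):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class AspNetCacheDemo
{
    public static void Put(string key, object value)
    {
        // Insert with a 30-minute sliding expiration. NotRemovable keeps
        // the item from being scavenged under memory pressure, but it is
        // still removed when the expiration policy says so.
        HttpRuntime.Cache.Insert(
            key,
            value,
            null,                          // no CacheDependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(30),
            CacheItemPriority.NotRemovable,
            null);                         // no removal callback
    }

    public static object Get(string key)
    {
        // The indexer returns null when the key is absent or expired.
        return HttpRuntime.Cache[key];
    }
}
```

Note that this requires a reference to System.Web.dll, which is only available on the full .NET Framework.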
Pros:
It’s built-in.
Despite the .NET 1.0 syntax, it’s fairly simple to use.
When used in a web context, it’s well-tested. Outside of web contexts, according to Google searches it is not commonly known to cause problems, despite Microsoft recommending against it, so long as you’re using .NET 2.0 or later.
You can be notified via a delegate when an item is removed, which is necessary if you need to keep it alive and you could not set the item’s priority in advance.
Individual items have the flexibility of any of (a), (b), or (c) methods of expiration and removal in the list of removal methods at the top of this article. You can also associate expiration behavior with the presence of a physical file.
Cons:
Not only is it static, there is only one. You cannot create your own type with its own static instance of a Cache. You can only have one bucket for your entire app, period. You can wrap the bucket with your own wrappers that do things like pre-inject prefixes in the keys and remove these prefixes when you pull the key/value pairs back out. But there is still only one bucket. Everything is lumped together. This can be a real nuisance if, for example, you have a service that needs to cache three or four different kinds of data separately. This shouldn’t be a big problem for pathetically simple projects. But if a project has any significant degree of complexity due to its requirements, the ASP.NET cache will typically not suffice.
Items can disappear, willy-nilly. A lot of people aren't aware of this—I wasn't, until I refreshed my knowledge on this cache implementation. By default, the ASP.NET cache is designed to destroy items when it "feels" like it. More specifically, see (c) in my definition of a cache table at the top of this article. If another thread in the same process is working on something completely different, and it dumps high-priority items into the cache, then as soon as .NET decides it needs to reclaim some memory it will start to destroy some items in the cache according to their priorities, lower priorities first. All of the examples documented here for adding cache items use the default priority, rather than the NotRemovable priority value, which keeps an item from being removed for memory-clearing purposes but will still remove it according to the expiration policy. Peppering CacheItemPriority.NotRemovable into cache invocations can be cumbersome; otherwise a wrapper is necessary.
The key must be a string. If, for example, you are caching data records where the records are keyed on a long or an integer, you must convert the key to a string first.
The syntax is stale. It’s .NET 1.0 syntax, even uglier than ArrayList or Hashtable. There are no generics here, no IDictionary<> interface. It has no Contains() method, no Keys collection, no standard events; it only has a Get() method plus an indexer that does the same thing as Get(), returning null if there is no match, plus Add(), Insert() (redundant?), Remove(), and GetEnumerator().
Ignores the DRY principle of setting up your default expiration/removal behaviors so you can forget about them. You have to explicitly tell the cache how you want the item you're adding to expire or be removed every time you add an item.
No way to access the caching details of a cached item such as the timestamp of when it was added. Encapsulation went a bit overboard here, making it difficult to use the cache when in code you’re attempting to determine whether a cached item should be invalidated against another caching mechanism (i.e. session collection) or not.
Removal events are not exposed as events and must be tracked at the time of add.
And if I haven’t said it enough, Microsoft explicitly recommends against it outside of the web. And if you’re cursed with .NET 1.1, you not supposed to use it with any confidence of stability at all outside of the web so don’t bother.
.NET 4.0’s ObjectCache / MemoryCache
Microsoft finally implemented an abstract ObjectCache class in the latest version of the .NET Framework, and a MemoryCache implementation that inherits and implements ObjectCache for in-memory purposes in a non-web setting.
System.Runtime.Caching.ObjectCache is in the System.Runtime.Caching.dll assembly. It is an abstract class that declares basically the same .NET 1.0 style interfaces that are found in the ASP.NET cache. System.Runtime.Caching.MemoryCache is the in-memory implementation of ObjectCache and is very similar to the ASP.NET cache, with a few changes.
To add an item with a sliding expiration, your code would look something like this:
var config = new NameValueCollection();
var cache = new MemoryCache("myMemCache", config);
cache.Add(new CacheItem("a", "b"),
    new CacheItemPolicy
    {
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
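Retrieval is symmetric. A self-contained sketch of the round trip (the MemoryCacheDemo wrapper is illustrative only; a System.Runtime.Caching reference is assumed):

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

public static class MemoryCacheDemo
{
    public static string RoundTrip()
    {
        // Each MemoryCache instance is its own bucket, unlike the single
        // static ASP.NET cache.
        var cache = new MemoryCache("myMemCache", new NameValueCollection());

        cache.Add(new CacheItem("a", "b"),
            new CacheItemPolicy
            {
                Priority = CacheItemPriority.NotRemovable,
                SlidingExpiration = TimeSpan.FromMinutes(30)
            });

        // Contains() and a plain Get() are part of the improved interface.
        return cache.Contains("a") ? (string)cache.Get("a") : null;
    }
}
```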
Pros:
It’s built-in, and now supported and recommended by Microsoft outside of the web.
Unlike the ASP.NET cache, you can instantiate a MemoryCache object instance.
Note: It doesn’t have to be static, but it should be—that is Microsoft’s recommendation (see yellow Caution).
A few slight improvements have been made vs. the ASP.NET cache’s interface, such as the ability to subscribe to removal events without necessarily being there when the items were added, the redundant Insert() was removed, items can be added with a CacheItem object with an initializer that defines the caching strategy, and Contains() was added.
Cons:
Still does not fully reinforce DRY. From my small amount of experience, you still can’t set the sliding expiration TimeSpan once and forget about it. And frankly, although the policy in the item-add sample above is more readable, it necessitates horrific verbosity.
It is still not generically keyed; it requires a string as the key. So you can't key by long or int when caching data records, unless you convert the key to a string.
DIY: Build One Yourself
It’s actually pretty simple to create a caching dictionary that performs explicit or sliding expiration. (It gets a lot harder if you want items to be auto-removed for memory-clearing purposes.) Here’s all you have to do:
Create a value container class called something like Expiring or Expirable that would contain a value of type T, a TimeStamp property of type DateTime to store when the value was added to the cache, and a TimeSpan that would indicate how far out from the timestamp that the item should expire. For explicit expiration you can just expose a property setter that sets the TimeSpan given a date subtracted by the timestamp.
Create a class, let's call it ExpirableItemsDictionary&lt;K,T&gt;, that implements IDictionary&lt;K,T&gt;. I prefer to make it a generic class with &lt;K,T&gt; defined by the consumer.
In the class created in #2, add a Dictionary&lt;K, Expiring&lt;T&gt;&gt; as a property and call it InnerDictionary.
The implementation of IDictionary&lt;K,T&gt; in the class created in #2 should use the InnerDictionary to store cached items. Encapsulation would hide the caching method details via instances of the type created in #1 above.
Make sure the indexer (this[]), ContainsKey(), etc., are careful to clear out expired items and remove the expired items before returning a value. Return null in getters if the item was removed.
Use thread locks on all getters, setters, ContainsKey(), and particularly when clearing the expired items.
Raise an event whenever an item gets removed due to expiration.
Add a System.Threading.Timer instance and rig it during initialization to auto-remove expired items every 15 seconds. This is the same behavior as the ASP.NET cache.
You may want to add an AddOrUpdate() routine that pushes out the sliding expiration by replacing the timestamp on the item's container (the Expiring&lt;T&gt; instance) if it already exists.
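The steps above might be sketched roughly like this (heavily abbreviated, with all names taken from the steps; a full version would implement IDictionary&lt;K,T&gt;, raise removal events, and run the cleanup Timer from step #8):

```csharp
using System;
using System.Collections.Generic;

// Step #1: a container holding the value, when it was cached, and how
// long past that timestamp it stays valid.
public class Expiring<T>
{
    public T Value { get; set; }
    public DateTime TimeStamp { get; set; }
    public TimeSpan TimeToLive { get; set; }
    public bool IsExpired
    {
        get { return DateTime.UtcNow - TimeStamp > TimeToLive; }
    }
}

// Steps #2-#6, abbreviated: a generically keyed dictionary that clears
// expired entries on read, under a thread lock.
public class ExpirableItemsDictionary<K, T>
{
    private readonly Dictionary<K, Expiring<T>> InnerDictionary =
        new Dictionary<K, Expiring<T>>();
    private readonly object sync = new object();

    // DRY: set the default behavior once instead of per item.
    public TimeSpan DefaultTimeToLive = TimeSpan.FromMinutes(15);

    public void Add(K key, T value)
    {
        lock (sync)
        {
            InnerDictionary[key] = new Expiring<T>
            {
                Value = value,
                TimeStamp = DateTime.UtcNow,
                TimeToLive = DefaultTimeToLive
            };
        }
    }

    public bool TryGetValue(K key, out T value)
    {
        lock (sync)
        {
            Expiring<T> item;
            if (InnerDictionary.TryGetValue(key, out item) && !item.IsExpired)
            {
                item.TimeStamp = DateTime.UtcNow; // sliding expiration
                value = item.Value;
                return true;
            }
            InnerDictionary.Remove(key); // drop the expired entry, if any
            value = default(T);
            return false;
        }
    }
}
```

Note the key can be any type, such as long, which the ASP.NET cache and MemoryCache both disallow.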
Microsoft has to support its original designs because its user base has built up a dependency upon them, but that does not mean that they are good designs.
Pros:
You have complete control over the implementation.
Can reinforce DRY by setting up default caching behaviors and then just dropping key/value pairs in without declaring the caching details each time you add an item.
Can implement modern interfaces, namely IDictionary<K,T>. This makes it much easier to consume as its interface is more predictable as a dictionary interface, plus it makes it more accessible to helpers and extension methods that work with IDictionary<>.
Caching details can be unencapsulated, such as by exposing your InnerDictionary via a public read-only property, allowing you to write explicit unit tests against your caching strategy as well as extend your basic caching implementation with additional caching strategies that build upon it.
Although it is not necessarily a familiar interface for those who already made themselves comfortable with the .NET 1.0 style syntax of the ASP.NET cache or the Caching Application Block, you can define the interface to look like however you want it to look.
Can use any type for keys. This is one reason why generics were created. Not everything should be keyed with a string.
Cons:
Is not invented by, nor endorsed by, Microsoft, so it is not going to have the same quality assurance.
Assuming only the instructions I described above are implemented, does not “willy-nilly” clear items for clearing memory on a priority basis (which is a corner-case utility function of a cache anyway .. BUY RAM where you would be using the cache, RAM is cheap).
Among all four of these options, this is my preference. I have implemented this basic caching solution. So far, it seems to work perfectly, there are no known bugs (please contact me with comments below or at jon-at-jondavis if there are!!), and I intend to use it in all of my smaller side projects that need basic caching. Here it is:
Github link: https://github.com/kroimon/ExpirableItemDictionary
Old Link: ExpirableItemDictionary.zip
Worthy Of Mention: AppFabric, NoSQL, Et Al
Notice that the title of this blog article indicates "Simple Caching", not "Heavy-Duty Caching". If you want to get into the heavy-duty stuff, you should look at dedicated, scale-out solutions.
MemoryCache.Default can also serve as a "bridge" if you're migrating a classic ASP.NET MVC app to ASP.NET Core, because there's no "System.Web.Caching" and "HttpRuntime" in Core.
I also wrote a small benchmark to store a bool item 20000 times (and another benchmark to retrieve it) and MemoryCache seems to be two times slower (27ms vs 13ms - that's total for all 20k iterations) but they're both super-fast and this can probably be ignored.
MemoryCache is what it says it is: a cache stored in memory.
HttpRuntime.Cache (see http://msdn.microsoft.com/en-us/library/system.web.httpruntime.cache(v=vs.100).aspx and http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx) persists to whatever you configure it to in your application.
see for example "ASP.NET 4.0: Writing custom output cache providers"
http://weblogs.asp.net/gunnarpeipman/archive/2009/11/19/asp-net-4-0-writing-custom-output-cache-providers.aspx
Related
The current advice for using an Entity Manager in an ASP.NET request seems to be to just set the AuthorizedThreadID property to NULL (reference 1 and 2). While that works, it seems like that is turning off a very important 'safety net'. While I try very hard to use the Entity Manager in a thread-safe way, it is still nice to have that safety net in case I get it wrong...so I'd rather not have to just set it to NULL.
In the ASP.NET world, there is still roughly a single thread of execution - it's just that the actual thread can change when you are doing async work. I can think of a few possible solutions:
The EntityManager.SafeThreadingCheck() method does some kind of extra magic to support ASP.NET requests. I can understand that IdeaBlade might not want to do this...which leads me to the second option...
EntityManager provides some extensibility points for me to provide my own version of SafeThreadingCheck() where I can implement the special magic for verifying that the entity manager is still 'logically' on the request thread. I might have to do some weird stuff here but I don't think it would be terribly complicated.
I try to use other extensibility points to detect when I should call my own FancySafeThreadingCheck() method. This has the downside that I have to try to hook into all the necessary places whereas the existing SafeThreadingCheck() method is already being called (presumably) from all the necessary places at exactly the best time.
I can understand this might not be the highest priority feature request but it also seems like it might not be too hard (at least for option #2). Or perhaps there are some other workarounds that may be better? My end goal is to avoid turning off this safety check...I'm open to options to get there.
DevForce 7.2.4 will include the feature to set a delegate on the EntityManager to provide custom authorized thread ID logic.
I'm currently developing a C# MVC REST web api, and am trying to choose between one of two possibilities for our design.
Without getting too deep into our design, we intend to have a class for data access, which we'll call DataSource. Each DataSource will need to execute small, contained blocks of logic to correctly build an appropriate response. Due to a desire to be able to hot-load code in the future, we don't want to simply have these be functions on DataSource, instead we'd like them to be provided by other assemblies. We have a proof of concept of this implemented, and so far, so good.
What I'm trying to decide between is writing a static class with a single static ExecuteQuery function, or writing a factory method to create instances of these classes, which have an instance method called ExecuteQuery.
What are the performance considerations between creating multiple, short-lived objects every request, vs calling static methods?
Intuitively, static methods would be faster, but I'm already expecting that I'll run into a bit of a headache calling them through reflection (to support the hot-loaded code requirement).
If there's not a huge penalty for short-lived objects, then they might win out on simplicity alone.
Relevant information about our expected loads:
Response times in the 300ms - 800ms range
Average load of about 2000 web clients
Peak load of about 4000 clients
Clients doing queries every 2 - 5 seconds
Client peak rate of 1 query every second
Also, each DataSource would create a maximum of 8, average of 3 of these instances.
Use a static class that delegates the calls to the implementation classes.
These implementation classes should implement a common interface, which will allow you to call methods on them without needing reflection. And of course, static methods can't implement interface methods; since interface implementations have to be instances, you need some kind of factory to instantiate them. If they live in external assemblies, I strongly suggest you take a look at the Managed Extensibility Framework (MEF), see http://msdn.microsoft.com/en-us/library/dd460648.aspx.
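A rough sketch of that interface-plus-factory shape (every name here is hypothetical; a hot-loading version would populate the registry from external assemblies, e.g. via MEF, instead of hard-coding it):

```csharp
using System;
using System.Collections.Generic;

// The common interface the implementation classes share; calls go
// through it directly, with no reflection at the call site.
public interface IQueryHandler
{
    string ExecuteQuery(string request);
}

// A stand-in implementation; real ones would live in plugin assemblies.
public class EchoHandler : IQueryHandler
{
    public string ExecuteQuery(string request)
    {
        return "echo:" + request;
    }
}

// The static facade that delegates to registered factories.
public static class DataSourceDispatcher
{
    private static readonly Dictionary<string, Func<IQueryHandler>> factories =
        new Dictionary<string, Func<IQueryHandler>>
        {
            { "echo", () => new EchoHandler() }
        };

    public static string ExecuteQuery(string handlerName, string request)
    {
        // A new short-lived instance per call; switching this to cached
        // singleton instances is a one-line change in this design.
        return factories[handlerName]().ExecuteQuery(request);
    }
}
```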
What are the performance considerations between creating multiple, short-lived objects every request, vs calling static methods? Given that these methods will do data access, the performance implications are completely and utterly negligible.
If you use MEF, the framework will create singleton-like instances for you.
If you roll your own, and want to remove the need to create these objects multiple times, you can implement the Singleton pattern on them.
I assume each DataSource instance will make a new connection to the database. If so, then it would make sense to only have one instance. The only way to find out if "it's a huge penalty" is to create a mock-up of both solutions and profile and see if the impact is significant or not.
Since you do not seem to have many clients at once, this also sits well with the Singleton pattern.
Not many concurrent queries (mostly because of the above statement).
You have a defined specification of response time.
The only argument I can make for the Factory pattern is for "simplicity". If the project is truly time sensitive, then I guess you have no choice. But if you really want performance, go with Singleton.
Use MEF. No need to invent your own plugin framework. It will steer you towards instance methods exposed via interfaces. It is not uncommon to create per-request objects... the MVC framework does that all over the place. Per-request instances are a particularly good thing for data access scenarios so that you can support things like transactions/rollbacks in a way where one user's experience doesn't impact other users until you explicitly make it. Use response caching where appropriate if perf is a problem.
The main decision should be "does this object have state?"
If "No", then by all means, make it a static method.
IMHO .. PSM
Here I need to cache some entities, for example, a page tree in a content management system (CMS). The system allows developers to write plugins, in which they can access the cached page tree. Is it good or bad to make the cached page tree mutable (i.e., there are setters on the tree node objects, and/or we expose the Add and Remove methods on the ChildPages collection, so client code can set properties of the page tree nodes and add/remove tree nodes freely)?
Here's my opinions:
(1) If the page tree is immutable, plugin developers have no way to modify the tree unexpectedly. That way we can avoid some subtle bugs.
(2) But sometimes we need to change the name of a page. If the page tree is immutable, we have to invoke some method like Refresh() to refresh the cache. This causes an extra database hit (two database hits in total, when one of the two could have been avoided). In this case, if the page tree is mutable, we can directly change the name in the page tree to bring it up to date (so only one database hit is needed).
What do you think about it? And what will you do if you encounter such a situation?
Thanks in advance! :)
UPDATE: The page tree is something like:
public class PageCacheItem {
public string Name { get; set; }
public string PageTitle { get; set; }
public PageCacheItemCollection Children { get; private set; }
}
My problem here is not about the hashcode, because the PageCacheItem won't be put on a hashset or dictionary as keys.
My problem is:
If the PageCacheItem (the tree node) is mutable, that is, there are setters for its properties (e.g., setters for the Name and PageTitle properties), and some plugin author changes a property of a PageCacheItem by mistake, the system will be in an incorrect state (the cached data is not consistent with the data in the database), and this bug is hard to debug, because it's caused by some plugin, not by the system itself.
But if the PageCacheItem is readonly, it might be hard to implement efficient "cache refresh" functionality, because there are no setters for the properties, we can't simply update the properties by setting them to the latest values.
UPDATE2
Thanks guys. But I have one thing to note: I'm not going to develop a generic caching framework, but rather some APIs on top of an existing caching framework. So my API is a middle layer between the underlying caching framework and the plugin authors. A plugin author doesn't need to know anything about the underlying caching framework. He only needs to know that this page tree is retrieved from a cache, and he gets a strongly-typed PageCacheItem API to use, not the weakly-typed "object" retrieved from the underlying caching framework.
So my question is about designing APIs for plugin authors, that is: is it good or bad to make the API class PageCacheItem mutable (here mutable == properties can be set outside the PageCacheItem class)?
First, I assume you mean the cached values may or may not be mutable, rather than the identifier it is identified by. If you mean the identifier too, then I would be quite emphatic about being immutable in this regard (emphatic enough to have my post flagged for obscene language).
As for mutable values, there is no one right answer here. You've hit on the primary pro and con either way, and there are multiple variants within each of the two options you describe. Cache invalidation is in general a notoriously difficult problem (as in the well known quote from Phil Karlton, "There are only two hard problems in Computer Science: cache invalidation and naming things."*)
Some things to consider:
How often will changes be made. If changes are rare, refreshes become easy - dump the existing cache and let it rebuild.
Will the CMS be on multiple servers, or could it in the future, as this means that any invalidation information has to be shared.
How bad is stale data, and how soon is it bad (could you happily serve out-of-date values for the next hour or so, or would this conflict disastrously with fresh values)?
Does a revalidation approach make sense for you, where after a certain time a cached value is checked to be sure it is still valid, and the time-to-next-check is updated (alternatively, periodically dump old values and let them be retrieved from the fresh source again).
How easy is detecting staleness in the first place? If it's hard, this can rule out some approaches.
How much does the cache actually save? Could you just get rid of it?
I haven't mentioned threading issues, because the threading issues are difficult with any sort of cache unless you're single-threaded (and if it's a CMS I'm guessing it's web, and hence inherently multi-threaded). One thing I will say on the matter is that it's generally the case that a cache failure isn't critical (by definition, a cache failure has a fallback: get the fresh value), so it can be fruitful to take an approach where, rather than blocking indefinitely on the monitor (which is what lock does internally), you use Monitor.TryEnter with a timeout, and have the cache operation fail if the timeout is hit. Using a ReaderWriterLockSlim and allowing a slightly longer timeout for writing can be a good approach. This way, if you hit a point of heavy lock contention then the cache will stop working for some threads, but those threads still get usable data. This will suck for performance for those threads, but not as much as lock contention would for all affected threads, and caches are a place where it is very easy to introduce lock contention into a web project that only hits once you've gone live.
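That suggestion could be sketched like this (class name and timeout values are illustrative, not prescriptive):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// A cache that degrades instead of blocking: if the lock can't be taken
// within the timeout, report a miss and let the caller fall back to
// fetching the fresh value.
public class TimeoutCache<K, V>
{
    private readonly Dictionary<K, V> store = new Dictionary<K, V>();
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    private static readonly TimeSpan ReadTimeout = TimeSpan.FromMilliseconds(50);
    // Writers are allowed to wait a little longer, as suggested above.
    private static readonly TimeSpan WriteTimeout = TimeSpan.FromMilliseconds(250);

    public bool TryGet(K key, out V value)
    {
        value = default(V);
        if (!rwLock.TryEnterReadLock(ReadTimeout))
            return false; // treat contention as a cache miss
        try { return store.TryGetValue(key, out value); }
        finally { rwLock.ExitReadLock(); }
    }

    public bool TrySet(K key, V value)
    {
        if (!rwLock.TryEnterWriteLock(WriteTimeout))
            return false; // give up rather than stall the request
        try { store[key] = value; return true; }
        finally { rwLock.ExitWriteLock(); }
    }
}
```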
*(and of course the well known variant, "there are only two hard problems in Computer Science: cache invalidation, naming things, and off-by-one errors").
Look at it this way, if the entry is mutable, then it is likely that the hashcode will change when the object is mutated.
Depending on the dictionary implementation of the cache, it could either:
be 'lost'
in worst case the entire cache will need to be rehashed
There may be valid reasons why you want 'mutable hashcodes' but I cannot see a justification here. (I have only ever needed to do this once in the last 9 years).
It would be a lot easier just to remove and replace the entry you wish to be 'mutated'.
We have a server written in C# (Framework 3.5 SP1). Customers write client applications using our server API. Recently, we created several levels of license schemes, like Basic, Intermediate and All. If you have a Basic license then you can call a few methods of our API. Similarly, if you have Intermediate you get some extra methods to call, and if you have All then you can call all the methods.
When server starts it gets the license type. Now in each method I have to check the type of license and decide whether to proceed further with the function or return.
For example, a method InterMediateMethod() can only be used with an Intermediate license or an All license. So I have to do something like this:
public void InterMediateMethod()
{
    if (licenseType == "Basic")
    {
        throw new Exception("Access denied");
    }
}
This looks like a very lame approach to me. Is there any better way to do this? Is there any declarative way to do this by defining some custom attributes? I looked at creating a custom CodeAccessSecurityAttribute but did not have much success.
Since you are adding the "if" logic in every method (and god knows what else), you might find it easier to use PostSharp (AOP framework) to achieve the same, but personally, I don't like either of the approaches...
I think it would be much cleaner if you'd maintained three different branches (source code) for each license, which may add a little bit of overhead in terms of maintenance (maybe not), but at least keep it clean and simple.
I'm also interested what others have to say about it.
Good post, I like it...
Possibly one easy and clean approach would be to add a proxy API that duplicates all your API methods and exposes them to the client. When called, the proxy would either forward the call to your real method, or return a "not licensed" error. The proxies could be built into three separate (basic, intermediate, all) classes, and your server would create instances of the appropriate proxy for your client's licence. This has the advantage of having minimal performance overhead (because you only check the licence once). You may not even need to use a proxy for the "all" level, so it'll get maximum performance. It may be hard to slip this in depending on your existing design though.
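A hedged sketch of the proxy idea (the interface and class names are invented for illustration):

```csharp
using System;

// A slice of the server API, with methods at different licence tiers.
public interface IServerApi
{
    string BasicMethod();
    string IntermediateMethod();
}

// The real implementation, never handed to clients directly.
public class RealApi : IServerApi
{
    public string BasicMethod() { return "basic result"; }
    public string IntermediateMethod() { return "intermediate result"; }
}

// The proxy the server hands to Basic-licence clients. The licence is
// effectively checked once, when the server chooses which proxy to build,
// instead of inside every method.
public class BasicLicenseProxy : IServerApi
{
    private readonly IServerApi inner;
    public BasicLicenseProxy(IServerApi inner) { this.inner = inner; }

    public string BasicMethod() { return inner.BasicMethod(); }

    public string IntermediateMethod()
    {
        throw new InvalidOperationException("Not licensed");
    }
}
```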
Another possibility may be to redesign and break up your APIs into basic/intermediate/all "sections", and put them in separate assemblies, so the entire assembly can be enabled/disabled by the licence, and attempting to call an unlicensed method can just return a "method not found" error (e.g. a TypeLoadException will occur automatically if you simply haven't loaded the needed assembly). This will make it much easier to test and maintain, and again avoids checking at the per-method level.
If you can't do this, at least try to use a more centralised system than an "if" statement hand-written into every method.
Examples (which may or may not be compatible with your existing design) would include:
Add a custom attribute to each method and have the server dispatch code check this attribute using reflection before it passes the call into the method.
Add a custom attribute to mark the method, and use PostSharp to inject a standard bit of code into the method that will read and test the attribute against the licence.
Use PostSharp to add code to test the licence, but put the licence details for each method in a more data driven system (e.g. use an XML file rather than attributes to describe the method permissions). This will allow you to easily change the licensing across the entire server by editing a single file, and allow you to easily add whole new levels or types of licences in future.
Hope that gives you some ideas.
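The attribute-based suggestions above might look roughly like this (the attribute name, licence levels, and Rank helper are all hypothetical; the dispatch code or a PostSharp aspect would call Check before invoking the method):

```csharp
using System;
using System.Reflection;

// Marker attribute: each API method declares the minimum licence it needs.
[AttributeUsage(AttributeTargets.Method)]
public class RequiresLicenseAttribute : Attribute
{
    public string Level { get; private set; }
    public RequiresLicenseAttribute(string level) { Level = level; }
}

public class ServerApi
{
    [RequiresLicense("Intermediate")]
    public void IntermediateMethod() { /* real work */ }
}

public static class LicenseGate
{
    // Hypothetical ordering of levels; a real system would model this
    // in the data-driven way suggested above.
    private static int Rank(string level)
    {
        return level == "Basic" ? 0 : level == "Intermediate" ? 1 : 2;
    }

    // Dispatch-side check: read the attribute via reflection before
    // letting the call through.
    public static void Check(MethodInfo method, string currentLicense)
    {
        var required = method.GetCustomAttribute<RequiresLicenseAttribute>();
        if (required != null && Rank(currentLicense) < Rank(required.Level))
            throw new UnauthorizedAccessException("Access denied");
    }
}
```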
You might really want to consider buying a licensing solution rather than rolling your own. We use Desaware and are pretty happy with it.
Doing licensing at the method level is going to take you into a world of hurt. Maintenance on that would be a nightmare, and it won't scale at all.
You should really look at componentizing your product. Your code should roughly fall into "features", which can be bundled into "components". The trick is to make each component do a license check, and have a licensing solution that knows if a license includes a component.
Components for our products are generally on the assembly level, but for our web products they can get down to the ASP.Net server control level.
I wonder how the people are licensing the SOA services. They can be licensed per service or per end point.
That can be very hard to maintain.
You can try using the Strategy pattern.
This can be your starting point.
I agree with the answer from @Ostati that you should keep 3 branches of your code.
What I would further expand on that is then I would expose 3 different services (preferably WCF services) and issue certificates that grant access to the specific service. That way if anyone tried to access the higher level functionality they would just not be able to access the service period.
First off, I wish context based storage was consistent across the framework!
With that said, I'm looking for an elegant solution to make these properties safe across ASP.NET, WCF and any other multithreaded .NET code. The properties are located in some low-level tracing helpers (these are exposed via methods if you're wondering why they're internal).
I'd rather not have a dependency on unneeded assemblies (like System.Web, etc). I don't want to require anyone using this code to configure anything. I just want it to work ;) That may be too tall of an order though...
Anyone have any tricks up their sleeves? (I've seen Spring's implementation)
internal static string CurrentInstance
{
get
{
return CallContext.LogicalGetData(currentInstanceSlotName) as string;
}
set
{
CallContext.LogicalSetData(currentInstanceSlotName, value);
}
}
internal static Stack<ActivityState> AmbientActivityId
{
get
{
Stack<ActivityState> stack = CallContext.LogicalGetData(ambientActivityStateSlotName) as Stack<ActivityState>;
if (stack == null)
{
stack = new Stack<ActivityState>();
CallContext.LogicalSetData(ambientActivityStateSlotName, stack);
}
return stack;
}
}
Update
By safe I do not mean synchronized. Background on the issue here
Here is a link to (at least part of) NHibernate's "context" implementation:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Context/
It is not clear to me exactly where or how this comes into play in the context of NHibernate. That is, if I wanted to store some values in "the context" would I get "the context" from NHibernate and add my values? I don't use NHibernate, so I don't really know.
I suppose that you could look and determine for yourself if this kind of implementation would be useful to you. Apparently the idea would be to inject the desired implementation, depending on the type of application (ASP.NET, WCF, etc). That probably implies some configuration (maybe minimal if one were to use MEF to load "the" ICurrentSessionContext interface).
At any rate, I found this idea interesting when I found it some time ago while searching for information on CallContext.SetData/GetData/LogicalSetData/LogicalGetData, Thread.SetData/GetData, [ThreadStatic], etc.
Also, since you use CallContext.LogicalSetData rather than CallContext.SetData, I assume you want to take advantage of the fact that information associated with the logical thread is propagated to child threads, as opposed to just wanting a "thread-safe" place to store info. So, if you were to set (or push) the AmbientActivity at your app's startup and then never push another activity, any subsequent threads would be part of that same activity, because data stored via LogicalSetData is inherited by child threads.
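A minimal sketch of that inheritance behavior (assuming .NET Framework, where CallContext lives in System.Runtime.Remoting.Messaging; the slot name "ActivityName" is illustrative):

    using System;
    using System.Runtime.Remoting.Messaging;
    using System.Threading;

    class LogicalContextDemo
    {
        static void Main()
        {
            // Store a value in the logical call context on the main thread.
            CallContext.LogicalSetData("ActivityName", "Startup");

            Thread child = new Thread(() =>
            {
                // The child thread sees the parent's value, because the
                // logical call context is flowed to child threads.
                object inherited = CallContext.LogicalGetData("ActivityName");
                Console.WriteLine("Child sees: " + (inherited ?? "(null)"));
            });
            child.Start();
            child.Join();
        }
    }

Had the value been stored with CallContext.SetData (or [ThreadStatic]), the child thread would see null instead.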
If you have learned anything in the meantime since you first asked this question I would be very interested in hearing about it. Even if you haven't, I would be interested in learning about what you are doing with the context.
At the moment, I am working on maintaining some context information for logging/tracing (similar to Trace.CorrelationManager.ActivityId, Trace.CorrelationManager.LogicalOperationStack, and the log4net/NLog context support). I would like to save some context (current app, current app instance, current activity (maybe nested)) for use in an app or WCF service, AND I want to propagate it "automatically" across WCF service boundaries. This would allow logging statements written to a central repository to be correlated by client/activity/etc. We would be able to query and correlate all logging statements for a specific instance of a specific application, whether those statements were generated on the client or in one or more WCF services.
The WCF propagation of ActivityId is not necessarily sufficient for us because we want to propagate (or we think we do) more than just the ActivityId. Also, we want to propagate this information from Silverlight clients to WCF services and Trace.CorrelationManager is not available in Silverlight (at least not in 4.0, maybe something like it will be available in the future).
Currently I am prototyping the propagation of our "context" information using IClientMessageInspector and IDispatchMessageInspector. It looks like it will probably work ok for us.
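A hedged sketch of that inspector-based propagation is below. The header name, its namespace, and the ContextStore accessor are illustrative assumptions, not part of the WCF API; only the inspector interfaces and Message header calls are real:

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    public class ContextClientInspector : IClientMessageInspector
    {
        private const string HeaderName = "AmbientContext";     // assumed name
        private const string HeaderNs = "urn:example:context";  // assumed namespace

        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            // Copy the caller's ambient context into a custom SOAP header.
            string context = ContextStore.Current; // hypothetical accessor
            request.Headers.Add(
                MessageHeader.CreateHeader(HeaderName, HeaderNs, context));
            return null;
        }

        public void AfterReceiveReply(ref Message reply, object correlationState) { }
    }

    public class ContextDispatchInspector : IDispatchMessageInspector
    {
        private const string HeaderName = "AmbientContext";
        private const string HeaderNs = "urn:example:context";

        public object AfterReceiveRequest(ref Message request,
            IClientChannel channel, InstanceContext instanceContext)
        {
            // Restore the caller's context on the service side, if present.
            int index = request.Headers.FindHeader(HeaderName, HeaderNs);
            if (index >= 0)
                ContextStore.Current = request.Headers.GetHeader<string>(index);
            return null;
        }

        public void BeforeSendReply(ref Message reply, object correlationState) { }
    }

The inspectors would then be attached via an IEndpointBehavior (or behavior configuration) on both the client and the service.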
Regarding a dependency on System.Web, the NHibernate implementation does have a "ReflectiveHttpContext" that uses reflection to access the HttpContext so there would not be a project dependency on System.Web. Obviously, System.Web would have to be available where the app is deployed if HttpContext is configured to be used.
I don't know that using CallContext is the right move here if the desire is simply to provide thread-safe access to your properties. If that is the case, the lock statement is all you need.
However, you have to make sure you are applying it correctly.
With CallContext, you get thread-safe access because calls arriving on different threads have separate call-context stores. However, that is very different from making access to a shared resource thread-safe.
If you want to share the same value across multiple threads, then the lock statement is the way to go. Otherwise, if you want specific values on a per-thread/call basis, use the CallContext, or use the static GetData/SetData methods on the Thread class, or the ThreadStatic attribute (or any number of thread-based storage mechanisms).
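A minimal sketch contrasting the two approaches, assuming .NET Framework: the lock shares one value safely across all threads, while [ThreadStatic] gives each thread its own independent value (the names here are illustrative).

    using System;
    using System.Threading;

    static class SharedVsPerThread
    {
        private static readonly object Gate = new object();
        private static string sharedValue;

        // One value for the whole process; the lock makes access thread-safe.
        internal static string SharedValue
        {
            get { lock (Gate) { return sharedValue; } }
            set { lock (Gate) { sharedValue = value; } }
        }

        // One value per thread; no lock is needed because nothing is shared.
        [ThreadStatic]
        private static string perThreadValue;

        internal static string PerThreadValue
        {
            get { return perThreadValue; }
            set { perThreadValue = value; }
        }
    }

Note that, unlike CallContext.LogicalSetData, a [ThreadStatic] value is not inherited by child threads, so it only fits the per-thread case, not the propagation scenario discussed above.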