Seeking One-Size-Fits-All Context-Based Storage - C#

First off, I wish context based storage was consistent across the framework!
With that said, I'm looking for an elegant solution to make these properties safe across ASP.NET, WCF and any other multithreaded .NET code. The properties are located in some low-level tracing helpers (these are exposed via methods if you're wondering why they're internal).
I'd rather not have a dependency on unneeded assemblies (like System.Web, etc). I don't want to require anyone using this code to configure anything. I just want it to work ;) That may be too tall of an order though...
Anyone have any tricks up their sleeves? (I've seen Spring's implementation)
internal static string CurrentInstance
{
    get
    {
        return CallContext.LogicalGetData(currentInstanceSlotName) as string;
    }
    set
    {
        CallContext.LogicalSetData(currentInstanceSlotName, value);
    }
}
internal static Stack<ActivityState> AmbientActivityId
{
    get
    {
        Stack<ActivityState> stack = CallContext.LogicalGetData(ambientActivityStateSlotName) as Stack<ActivityState>;
        if (stack == null)
        {
            stack = new Stack<ActivityState>();
            CallContext.LogicalSetData(ambientActivityStateSlotName, stack);
        }
        return stack;
    }
}
Update
By safe I do not mean synchronized. Background on the issue here

Here is a link to (at least part of) NHibernate's "context" implementation:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Context/
It is not clear to me exactly where or how this comes into play in the context of NHibernate. That is, if I wanted to store some values in "the context" would I get "the context" from NHibernate and add my values? I don't use NHibernate, so I don't really know.
I suppose that you could look and determine for yourself if this kind of implementation would be useful to you. Apparently the idea would be to inject the desired implementation, depending on the type of application (ASP.NET, WCF, etc). That probably implies some configuration (maybe minimal if one were to use MEF to load "the" ICurrentSessionContext interface).
At any rate, I found this idea interesting when I found it some time ago while searching for information on CallContext.SetData/GetData/LogicalSetData/LogicalGetData, Thread.SetData/GetData, [ThreadStatic], etc.
Also, based on your use of CallContext.LogicalSetData rather than CallContext.SetData, I assume that you want to take advantage of the fact that information associated with the logical thread is propagated to child threads, as opposed to just wanting a "thread-safe" place to store info. So, if you were to set (or push) the AmbientActivity in your app's startup and then not push any more activities, any subsequent threads would also be part of that same activity, since data stored via LogicalSetData is inherited by child threads.
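A quick console sketch makes the difference visible (the "ActivityId" slot name here is just an example):
using System;
using System.Runtime.Remoting.Messaging;
using System.Threading;

class LogicalDataDemo
{
    static void Main()
    {
        // LogicalSetData flows with the execution context...
        CallContext.LogicalSetData("ActivityId", Guid.NewGuid());

        // ...so a child thread started afterwards sees the same value.
        // With CallContext.SetData the child thread would see null instead.
        new Thread(() =>
            Console.WriteLine(CallContext.LogicalGetData("ActivityId"))).Start();
    }
}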
If you have learned anything in the meantime since you first asked this question I would be very interested in hearing about it. Even if you haven't, I would be interested in learning about what you are doing with the context.
At the moment, I am working on maintaining some context information for logging/tracing (similar to Trace.CorrelationManager.ActivityId and Trace.CorrelationManager.LogicalOperationStack and log4net/NLog context support). I would like to save some context (current app, current app instance, current activity (maybe nested)) for use in an app or WCF service AND I want to propagate it "automatically" across WCF service boundaries. This is to allow logging statements stored in a central repository to be correlated by client/activity/etc. We would be able to query and correlate all logging statements from a specific instance of a specific application. The logging statements could have been generated on the client or in one or more WCF services.
The WCF propagation of ActivityId is not necessarily sufficient for us because we want to propagate (or we think we do) more than just the ActivityId. Also, we want to propagate this information from Silverlight clients to WCF services and Trace.CorrelationManager is not available in Silverlight (at least not in 4.0, maybe something like it will be available in the future).
Currently I am prototyping the propagation of our "context" information using IClientMessageInspector and IDispatchMessageInspector. It looks like it will probably work ok for us.
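For illustration, a stripped-down sketch of that inspector approach might look like the following. The header name/namespace and the TraceContext class are placeholders, not our real code:
using System.Runtime.Remoting.Messaging;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Placeholder ambient store, standing in for the CallContext-backed
// properties shown in the question.
public static class TraceContext
{
    public static string CurrentInstance
    {
        get { return CallContext.LogicalGetData("CurrentInstance") as string; }
        set { CallContext.LogicalSetData("CurrentInstance", value); }
    }
}

// Client side: copy the ambient context into a custom SOAP header.
public class ContextClientInspector : IClientMessageInspector
{
    public const string HeaderName = "TracingContext";  // hypothetical
    public const string HeaderNs = "urn:example:tracing"; // hypothetical

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        request.Headers.Add(MessageHeader.CreateHeader(
            HeaderName, HeaderNs, TraceContext.CurrentInstance));
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

// Service side: restore the context before the operation executes.
public class ContextDispatchInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
        InstanceContext instanceContext)
    {
        int index = request.Headers.FindHeader(
            ContextClientInspector.HeaderName, ContextClientInspector.HeaderNs);
        if (index >= 0)
        {
            TraceContext.CurrentInstance = request.Headers.GetHeader<string>(index);
        }
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}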
Regarding a dependency on System.Web, the NHibernate implementation does have a "ReflectiveHttpContext" that uses reflection to access the HttpContext so there would not be a project dependency on System.Web. Obviously, System.Web would have to be available where the app is deployed if HttpContext is configured to be used.

I don't know that using CallContext is the right move here if the desire is simply to provide thread-safe access to your properties. If that is the case, the lock statement is all you need.
However, you have to make sure you are applying it correctly.
With CallContext, you get thread-safe access because calls coming in on different threads get separate CallContext stores. However, that's very different from making access to a shared resource thread-safe.
If you want to share the same value across multiple threads, then the lock statement is the way to go. Otherwise, if you want specific values on a per-thread/call basis, use the CallContext, or use the static GetData/SetData methods on the Thread class, or the ThreadStatic attribute (or any number of thread-based storage mechanisms).
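To make the distinction concrete, here is a minimal sketch of both options (the slot and field names are arbitrary):
using System.Runtime.Remoting.Messaging;

internal static class ContextExamples
{
    // Option 1: one value shared by all threads, guarded by a lock.
    private static readonly object Sync = new object();
    private static string sharedInstance;

    internal static string SharedInstance
    {
        get { lock (Sync) { return sharedInstance; } }
        set { lock (Sync) { sharedInstance = value; } }
    }

    // Option 2: a separate value per logical thread/call.
    internal static string PerCallInstance
    {
        get { return CallContext.LogicalGetData("PerCallInstance") as string; }
        set { CallContext.LogicalSetData("PerCallInstance", value); }
    }
}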

Related

Better solution for using an Entity Manager in an ASP.NET request?

The current advice for using an Entity Manager in an ASP.NET request seems to be to just set the AuthorizedThreadID property to NULL (references 1 and 2). While that works, it seems like that is turning off a very important 'safety net'. While I try very hard to use the Entity Manager in a thread-safe way, it is still nice to have that safety net in case I get it wrong... so I'd rather not have to just set it to NULL.
In the ASP.NET world, there is still roughly a single thread of execution - it's just that the actual thread can change when you are doing async work. I can think of a few possible solutions:
1. The EntityManager.SafeThreadingCheck() method does some kind of extra magic to support ASP.NET requests. I can understand that IdeaBlade might not want to do this... which leads me to the second option...
2. EntityManager provides some extensibility points for me to provide my own version of SafeThreadingCheck(), where I can implement the special magic for verifying that the entity manager is still 'logically' on the request thread. I might have to do some weird stuff here, but I don't think it would be terribly complicated.
3. I try to use other extensibility points to detect when I should call my own FancySafeThreadingCheck() method. This has the downside that I have to try to hook into all the necessary places, whereas the existing SafeThreadingCheck() method is already being called (presumably) from all the necessary places at exactly the best time.
I can understand this might not be the highest priority feature request but it also seems like it might not be too hard (at least for option #2). Or perhaps there are some other workarounds that may be better? My end goal is to avoid turning off this safety check...I'm open to options to get there.
DevForce 7.2.4 will include the feature to set a delegate on the EntityManager to provide custom authorized thread ID logic.

Need help avoiding the use of a Singleton

I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross-platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring code out into separate projects to reduce duplication, and hence have a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple: it's a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data that is used by all parts of the application. It has a ProfileManager class which contains Profiles. The ProfileManager will read/save Profiles as separate XML files on the hard drive, and allow the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI followed the Smart GUI anti-pattern. It was a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than being well engineered. This led to ProfileManager being a singleton, so that from anywhere in the application the GUI could access the active Profile.
This meant I could just go ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each had a save button and thus access to the ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong in using the singleton here, and that concerns me: I see nothing wrong in it, yet I know singletons aren't ideal. Is there a better way this should be handled? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created and register for events for when profiles are changed? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.S. I'm having to build the application against Compact Framework 3.5, which limits a lot of the normal .NET Framework classes that can be used.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable, state. Singletons are a great abstraction when your application really does need access to global, shared state: your mobile app that needs to access the microphone or audio playback needs to coordinate this, as there's only one set of speakers, for instance.
In the case of your application, you have a single, "active" profile, that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits into this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to have as a singleton.
I do think you should either use dependency injection or a factory pattern to get hold of a profile manager, though. You only need to write a unit test for a class that requires the use of a profile to understand the need for this; you want to be able to pass in a programmatically created profile at runtime, otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
One thing to consider is to have an interface for your ProfileManager, and pass an instance of that to the constructor of each view (or anything) that uses it. This way, you can easily have a singleton, or an instance per thread / user / etc, or have an implementation that goes to a database / web service / etc.
Another option would be to have all the things that use the ProfileManager call a factory instead of accessing it directly. Then that factory could return an instance, again it could be a singleton or not (go to database or file or web service, etc, etc) and most of your code doesn't need to know.
Doesn't answer your direct question, but it does make the impact of a change in the future close to zero.
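As a rough sketch of that interface idea, reusing the Profile class from the question (the other names are only illustrative):
public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

// A view/presenter depends only on the abstraction, so a test can pass in
// a fake implementation instead of one backed by an XML file on disk.
public class SettingsPresenter
{
    private readonly IProfileManager profiles;

    public SettingsPresenter(IProfileManager profiles)
    {
        this.profiles = profiles;
    }

    public void Save()
    {
        profiles.SaveActiveProfile();
    }
}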
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal, so that your application can be sure the Profile Manager is available to everyone that needs it, and that no other part of the application can instantiate an extra one that would conflict with the existing one. It also avoids ugly extra parameters/fields everywhere, where you're attempting to pass around the one instance and maintain extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
The Singleton pattern was designed to avoid multiple instantiations and to provide a single point of entry. If that's what you want, then that's the way to go. Just make sure it's well documented.

How to explicitly and precisely control composition scoping?

I'm interested in ways to control composition scoping with MEF.
The most obvious example - web applications, where you have to create certain subset of components per request and dispose of them when the request is finished.
However, a general implementation of scoping may be useful in other contexts as well.
I'm looking at MEF2 preview and trying to make sense of it, but don't see a complete solution for some reason.
On one hand, there is this MVC integration module, where MEF is nice enough to take care of request scope for me, but that is not very usable outside of MVC (and outside of web for that matter), is it?
On the other hand, in the first preview-related post "What's new in MEF2", I've seen this thing called CompositionScopeDefinition that looks like an explicit specification for scopes, but with that one, I don't see a way to "close" the scope. To put it in other words: how does MEF determine when to dispose of components that were created within a scope?
And on third hand (yep :-), with MEF v1, I used to deal with scoping by creating nested CompositionContainers, but that doesn't work very well with custom ExportProviders.
What I would really like to see is something like:
using (var scope = compositionContainer.OpenScope( /* some scope definition here */ ))
{
    var rootComponent = scope.GetExport<MyRootComponent>(); // The component graph gets composed at this point
    rootComponent.DoYourScopedThing();
} // The component graph gets disposed at this point
If I had that thing, I could easily build MVC integration on top of it, but I could also use it in other contexts.
So, the question again: what do you use to deal with scoping problems like that? Or do you say MEF is not yet mature enough for serious use?
Good question - we are working on more documentation that should answer your question about CompositionScopeDefinition. Short version: CSD is used via an ExportFactory<T>, where CreateExport() returns a handle that is used to control the lifetime of the scope.
However, CSD is intended and optimized for desktop application scenarios; as you have no doubt seen, the MVC integration uses filtered catalogs and nested containers to control lifetime. This is still the recommended approach for 'transactional'-type lifetime in web and other work-processing scenarios.
It would be good to know more about the problems you face using custom ExportProviders with this approach.
A stronger 'custom' lifetime story is something we're very much working towards; letting us know about where MEF 2 falls short for your scenarios, especially via the CodePlex discussion forum, is a great help.
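To give a feel for that handle, here is a rough sketch of ExportFactory<T> as it appears in the MEF2 preview, reusing the names from the question (details may differ from the shipped bits):
using System.ComponentModel.Composition;

[Export]
public class MyRootComponent
{
    public void DoYourScopedThing() { /* scoped work goes here */ }
}

public class ScopedWork
{
    // MEF injects a factory instead of the part itself.
    [Import]
    public ExportFactory<MyRootComponent> RootFactory { get; set; }

    public void RunOnce()
    {
        // CreateExport() composes a fresh graph inside the scope;
        // disposing the handle disposes everything created within it.
        using (var scope = RootFactory.CreateExport())
        {
            scope.Value.DoYourScopedThing();
        }
    }
}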
I've found this post searching for details about CSD.
I want to use MEF to create an extensible WPF application with screen navigation that allows the client to open screen after screen inside a single window.
Each screen should have access to parts setup by previous screens and also have the ability to override some parts.
For example, when the user open a ProcessView it should have a ProcessProvider part which may be imported by screen navigated from the ProcessView, let's say ActivityView. The ActivityView should have access to the ProcessProvider so it will have context on which to operate.
Another example is that the root screen may have a ProcessListProvider which by default returns all processes in the database. A screen that wants to open the ProcessListView will need to somehow override the root ProcessListProvider with a customized ProcessListProvider, so the ProcessListView will still work but with the customized process list provider.
I hope I was able to communicate my requirements.
Ido.

Extensibility framework/pattern/good practice for Web services?

I'm currently working on a large real-time OLAP application. All data is held in RAM (a few gigabytes) and the common tasks involve brute-force scanning over that large quantity of data (which is fine). The results of processing are exposed via a web service (singleton/multithreaded) and presented using a Silverlight-based client.
The problem is that various customers need different functionality/algorithms and I don't know how to provide extensibility on the server-side. For the client side (Silverlight) I can use MEF/Prism, but I'm not sure what would be a good approach to tackle this problem on the server.
Please note that ideally other web services should have direct access (i.e. without marshaling) to the data of the main/current service which holds the large data model.
Are there any:
a) frameworks/libraries
b) patterns
c) good practices
which would help me to modularize the application and make the selection of desired modules and their deployment relatively easy?
Sounds to me like Dependency Inversion is required: isolate logical parts of the system (algorithms, etc) by defining interfaces, then use a DI / IoC framework to load the desired implementation at runtime (or on application start, etc).
I haven't used Ninject, but plenty of people love it, so you could try that; there's also Spring.Net.
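As a minimal sketch of that shape (all names here are invented for illustration):
// Contract for a pluggable, customer-specific algorithm.
public interface IAggregationAlgorithm
{
    double Compute(double[] slice);
}

public class DefaultAggregation : IAggregationAlgorithm
{
    public double Compute(double[] slice)
    {
        double sum = 0;
        foreach (double value in slice) sum += value;
        return sum;
    }
}

// The service depends only on the contract; the container (Ninject,
// Spring.Net, etc.) decides at runtime which implementation to supply.
public class OlapQueryService
{
    private readonly IAggregationAlgorithm algorithm;

    public OlapQueryService(IAggregationAlgorithm algorithm)
    {
        this.algorithm = algorithm;
    }

    public double Run(double[] slice)
    {
        return algorithm.Compute(slice);
    }
}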
Good Practices:
Ensure you have clear precise logging so you know what's being used and when.
Think about whether you want a 'default' implementation to load if the desired one fails, or whether you deliberately want to fail so that the wrong data isn't returned by mistake (such as through the use of a different algorithm).
I've found that using attributes to decorate injectable modules is really helpful (especially in a web-based system that you don't have immediate access to); one reason for this is that you can build pages or controls that list all the known/available implementations at runtime.
You can also use the attribute approach to build a UI that lets users select which one they want; I use it for an open source web-application framework I built: http://www.morphological.geek.nz/Morphfolia/Capabilities/AttributeDriven.aspx
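A bare-bones sketch of that attribute trick (the attribute and method names here are hypothetical):
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public class InjectableModuleAttribute : Attribute
{
    public string DisplayName { get; set; }
}

public static class ModuleCatalog
{
    // List every decorated implementation, e.g. to show on an admin page
    // or to let users pick the one they want.
    public static string[] ListModules(Assembly assembly)
    {
        return assembly.GetTypes()
            .Where(t => t.IsDefined(typeof(InjectableModuleAttribute), false))
            .Select(t => t.FullName)
            .ToArray();
    }
}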

Custom code access permissions

We have a server written in C# (Framework 3.5 SP1). Customers write client applications using our server API. Recently, we created several license levels, like Basic, Intermediate and All. If you have a Basic license then you can call only a few methods on our API. Similarly, if you have Intermediate you get some extra methods to call, and if you have All then you can call all the methods.
When the server starts, it gets the license type. Now in each method I have to check the type of license and decide whether to proceed further with the function or return.
For example, a method InterMediateMethod() can only be used with an Intermediate or All license. So I have to do something like this:
public void InterMediateMethod()
{
    if (licenseType == "Basic")
    {
        throw new Exception("Access denied");
    }
}
This looks like a very lame approach to me. Is there any better way to do this? Is there any declarative way to do this by defining some custom attributes? I looked at creating a custom CodeAccessSecurityAttribute but did not have much success.
Since you are adding the "if" logic in every method (and god knows what else), you might find it easier to use PostSharp (AOP framework) to achieve the same, but personally, I don't like either of the approaches...
I think it would be much cleaner if you maintained three different branches (source code) for each license, which may add a little bit of overhead in terms of maintenance (maybe not), but at least keeps it clean and simple.
I'm also interested what others have to say about it.
Good post, I like it...
Possibly one easy and clean approach would be to add a proxy API that duplicates all your API methods and exposes them to the client. When called, the proxy would either forward the call to your real method or return a "not licensed" error. The proxies could be built into three separate (basic, intermediate, all) classes, and your server would create instances of the appropriate proxy for your client's licence. This has the advantage of minimal performance overhead (because you only check the licence once). You may not even need to use a proxy for the "all" level, so it'll get maximum performance. It may be hard to slip this in depending on your existing design, though.
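For example, a hypothetical proxy for the Intermediate level might look like this (IServerApi and the method names are invented for illustration):
using System;

public interface IServerApi
{
    void BasicMethod();
    void InterMediateMethod();
    void AllOnlyMethod();
}

// The server hands each client the proxy that matches its licence,
// so the licence is effectively checked once, at proxy creation.
public class IntermediateApiProxy : IServerApi
{
    private readonly IServerApi inner;

    public IntermediateApiProxy(IServerApi inner)
    {
        this.inner = inner;
    }

    public void BasicMethod() { inner.BasicMethod(); }
    public void InterMediateMethod() { inner.InterMediateMethod(); }

    // Not covered by this licence level: fail instead of forwarding.
    public void AllOnlyMethod()
    {
        throw new UnauthorizedAccessException("Not licensed.");
    }
}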
Another possibility may be to redesign and break up your APIs into basic/intermediate/all "sections", and put them in separate assemblies, so the entire assembly can be enabled/disabled by the licence, and attempting to call an unlicensed method can just return a "method not found" error (e.g. a TypeLoadException will occur automatically if you simply haven't loaded the needed assembly). This will make it much easier to test and maintain, and again avoids checking at the per-method level.
If you can't do this, at least try to use a more centralised system than an "if" statement hand-written into every method.
Examples (which may or may not be compatible with your existing design) would include:
Add a custom attribute to each method and have the server dispatch code check this attribute using reflection before it passes the call into the method (roughly sketched after this list).
Add a custom attribute to mark the method, and use PostSharp to inject a standard bit of code into the method that will read and test the attribute against the licence.
Use PostSharp to add code to test the licence, but put the licence details for each method in a more data driven system (e.g. use an XML file rather than attributes to describe the method permissions). This will allow you to easily change the licensing across the entire server by editing a single file, and allow you to easily add whole new levels or types of licences in future.
Hope that gives you some ideas.
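As a rough sketch of the first example above (attribute plus reflection in the dispatch layer; everything here is hypothetical):
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresLicenseAttribute : Attribute
{
    public string Level { get; private set; }

    public RequiresLicenseAttribute(string level)
    {
        Level = level;
    }
}

public static class LicensedDispatcher
{
    // Real code would rank licence levels (Basic < Intermediate < All)
    // instead of demanding an exact match as this toy check does.
    public static object Invoke(object target, string methodName, string licence)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        var required = (RequiresLicenseAttribute)Attribute.GetCustomAttribute(
            method, typeof(RequiresLicenseAttribute));

        if (required != null && required.Level != licence)
            throw new UnauthorizedAccessException("Access denied");

        return method.Invoke(target, null);
    }
}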
You might really want to consider buying a licensing solution rather than rolling your own. We use Desaware and are pretty happy with it.
Doing licensing at the method level is going to take you into a world of hurt. Maintenance on that would be a nightmare, and it won't scale at all.
You should really look at componentizing your product. Your code should roughly fall into "features", which can be bundled into "components". The trick is to make each component do a license check, and have a licensing solution that knows if a license includes a component.
Components for our products are generally on the assembly level, but for our web products they can get down to the ASP.Net server control level.
I wonder how people license SOA services. They can be licensed per service or per endpoint.
That can be very hard to maintain.
You can try using the strategy pattern.
This can be your starting point.
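A minimal sketch of what that could look like (names hypothetical):
using System;
using System.Collections.Generic;

// One policy object per licence level, chosen once at server startup.
public interface ILicensePolicy
{
    bool Allows(string methodName);
}

public class IntermediatePolicy : ILicensePolicy
{
    private static readonly HashSet<string> Licensed =
        new HashSet<string> { "BasicMethod", "InterMediateMethod" };

    public bool Allows(string methodName)
    {
        return Licensed.Contains(methodName);
    }
}

// Usage inside an API method:
//   if (!policy.Allows("InterMediateMethod"))
//       throw new UnauthorizedAccessException("Access denied");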
I agree with the answer from #Ostati that you should keep 3 branches of your code.
What I would further add is that I would then expose 3 different services (preferably WCF services) and issue certificates that grant access to the specific service. That way, if anyone tried to access the higher-level functionality, they simply would not be able to access the service, period.
