I'm interested in ways to control composition scoping with MEF.
The most obvious example is web applications, where you have to create a certain subset of components per request and dispose of them when the request is finished.
However, a general implementation of scoping may be useful in other contexts as well.
I'm looking at the MEF2 preview and trying to make sense of it, but for some reason I don't see a complete solution.
On one hand, there is this MVC integration module, where MEF is nice enough to take care of request scope for me, but that is not very usable outside of MVC (and outside of web for that matter), is it?
On the other hand, in the first preview-related post, "What's new in MEF2", I've seen this thing called CompositionScopeDefinition that looks like an explicit specification for scopes, but with that one, I don't see a way to "close" the scope. To put it another way: how does MEF determine when to dispose of components that were created within a scope?
And on the third hand (yep :-), with MEF v1 I used to deal with scoping by creating nested CompositionContainers, but that approach doesn't work very well with custom ExportProviders.
What I would really like to see is something like:
using (var scope = compositionContainer.OpenScope( /* some scope definition here */ ))
{
    var rootComponent = scope.GetExport<MyRootComponent>(); // The component graph gets composed at this point
    rootComponent.DoYourScopedThing();
} // The component graph gets disposed at this point
If I had that thing, I could easily build MVC integration on top of it, but I could also use it in other contexts.
So, the question again: what do you use to deal with scoping problems like that? Or do you say MEF is not yet mature enough for serious use?
Good question. We are working on more documentation that should answer your question about CompositionScopeDefinition. Short version: CSD is used via an ExportFactory<T>, where CreateExport() returns a handle that is used to control the lifetime of the scope.
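In practice that looks roughly like the pseudocode you sketched, just inverted: rather than opening a scope on the container, you import a factory for the scope's root part. A minimal sketch, reusing your hypothetical MyRootComponent:

using System.ComponentModel.Composition;

public class ScopeConsumer
{
    [Import]
    public ExportFactory<MyRootComponent> Factory { get; set; }

    public void RunScopedWork()
    {
        // CreateExport() opens a new scope and returns a handle to it.
        using (ExportLifetimeContext<MyRootComponent> handle = Factory.CreateExport())
        {
            handle.Value.DoYourScopedThing();
        } // Disposing the handle tears down the scope and its parts.
    }
}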
However, CSD is intended and optimized for desktop application scenarios; as you have no doubt seen, the MVC integration uses filtered catalogs and nested containers to control lifetime. This is still the recommended approach for 'transactional'-type lifetime in web and other work-processing scenarios.
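For reference, a bare-bones version of that approach, assuming the per-request parts live in a separate catalog (requestCatalog below is a placeholder):

using System.ComponentModel.Composition.Hosting;

// Application-wide parts live in the root container.
var rootCatalog = new AssemblyCatalog(typeof(Program).Assembly);
var rootContainer = new CompositionContainer(rootCatalog);

// Per request: a child container that resolves request-scoped parts from
// requestCatalog and falls back to the root container for everything else.
using (var requestContainer = new CompositionContainer(requestCatalog, rootContainer))
{
    var root = requestContainer.GetExportedValue<MyRootComponent>();
    root.DoYourScopedThing();
} // Disposing the child container disposes the IDisposable parts it created.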
It would be good to know more about the problems you face using custom ExportProviders with this approach.
A stronger 'custom' lifetime story is something we're very much working towards; letting us know about where MEF 2 falls short for your scenarios, especially via the CodePlex discussion forum, is a great help.
I've found this post searching for details about CSD.
I want to use MEF to create an extensible WPF application with screen navigation that allows the client to open screen after screen inside a single window.
Each screen should have access to parts set up by previous screens and also have the ability to override some parts.
For example, when the user opens a ProcessView, it should have a ProcessProvider part which may be imported by a screen navigated to from the ProcessView, say an ActivityView. The ActivityView should have access to the ProcessProvider so it has a context to operate on.
Another example: the root screen may have a ProcessListProvider which by default returns all processes in the database. A screen that wants to open the ProcessListView needs to somehow override the root ProcessListProvider with a customized one so that the ProcessListView still works, but with the customized process list provider.
I hope I was able to communicate my requirements.
Ido.
I have recently learned that you can provide CascadingValues to the entire project by wrapping the Router component in the provider (Microsoft doc). How is this different from using dependency injection with a singleton pattern? (I know how injection works; I mean performance- and architecture-wise.)
Which do you think is better to use?
There has been some discussion about the performance impact of CascadingValues. I can recommend this article (it is an excellent tutorial as well).
As you mentioned, there are two aspects: performance and architecture.
Performance
I'd say [CascadingParameter] is costly compared to [Parameter] or "normal" fields and properties. Each component down the tree subscribes to changes of the [CascadingParameter]. If the value changes, a new parameter-setting cycle is started, which can lead to walking the render tree and checking whether something in the DOM needs to be changed. So even if no re-rendering is required, the process of reaching that conclusion consumes time. The more components there are, and the deeper the tree, the slower this process becomes.
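One mitigation worth knowing about: if a value never changes after initialization, you can set IsFixed="true" on the CascadingValue so that descendants skip the change subscription entirely. A minimal sketch (ThemeInfo and MainContent are made-up names; MainContent stands in for the app's component tree, e.g. the Router from the docs):

@* IsFixed promises Blazor the value will never change, so consuming
   components do not subscribe to change notifications. *@
<CascadingValue Value="@theme" IsFixed="true">
    <MainContent />
</CascadingValue>

@code {
    private readonly ThemeInfo theme = new ThemeInfo { ButtonClass = "btn-primary" };
}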
Architecture
To discuss this aspect, we can think about the CascadingAuthenticationState. It is part of the authentication framework for Blazor and provides access to check whether a user is authenticated or not. It is implemented as a cascading value instead of a singleton. Components down the tree, like menus, can easily use this value to hide/show items for non-authenticated users. Why?
Frequency of change and impact to the DOM
A question to answer concerns the impact of a change to a cascading value. If a user logs in or out, it is reasonable to assume that this will trigger a huge DOM change. So checking a huge part of the tree (if not the entire tree, depending on where the cascading value is placed) is not wasted overhead.
Besides, it is a good guess that there will be few changes to AuthenticationState during the lifetime of the application.
Simplicity
The menu component uses AuthorizeView, which consumes the cascading parameter of type Task<AuthenticationState>.
<AuthorizeView>
    <Authorized>
        <li>Admin</li>
    </Authorized>
    <NotAuthorized>
        <li>Log in</li>
    </NotAuthorized>
</AuthorizeView>
This snippet is easy to read, and you can understand it very quickly. If you did the same thing with a singleton service, you would have to implement the "communication" between component and service yourself. You would need a subscribe/unsubscribe scheme, maybe using events or more advanced techniques; you would have to write your own code, and again, don't forget your implementation of IDisposable to unsubscribe.
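To make that concrete, here is a rough sketch of what the singleton-service version forces every consuming component to do (AuthState and its members are invented for illustration):

using System;
using Microsoft.AspNetCore.Components;

// Invented stand-in for the cascaded authentication state:
public class AuthState
{
    public bool IsAuthenticated { get; private set; }
    public event Action Changed;

    public void SetAuthenticated(bool value)
    {
        IsAuthenticated = value;
        Changed?.Invoke();
    }
}

// Every consumer has to subscribe on init and unsubscribe on dispose:
public class MenuComponent : ComponentBase, IDisposable
{
    [Inject] public AuthState Auth { get; set; }

    protected override void OnInitialized() => Auth.Changed += OnAuthChanged;

    private void OnAuthChanged() => InvokeAsync(StateHasChanged);

    public void Dispose() => Auth.Changed -= OnAuthChanged;
}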
Cascading parameters are focused on UI
While a singleton service is a very generic approach that can solve many different issues, cascading values are specifically designed to solve UI update problems, and they do that very efficiently. In the case of the AuthenticationState, it even comes with a specialized view.
There is room to argue whether Blazor is all about UI, but with modern, feature-rich GUIs we sometimes have a layered approach inside the application. So by UI I mean the part of the application ultimately responsible for rendering.
Services can be used outside of this inner UI layer, throughout the entire application, and then reused in the UI as well, while cascading parameters can only be used inside components.
Cascading parameters are (mostly) one-way
A cascading parameter is "owned" by a component, usually the component where it is declared. From that point it is passed down the tree, and all other components can consume it. There is no straightforward, easy, and scalable way to update the value from a child component. Hence "mostly": there are ways to do it, but in my view it is a dirty path.
Summary
As with a lot of other technologies, the answer is: It depends on the use case.
top-down usage: cascading values
Components need to update the value: service
many changes: it depends highly on the tree structure whether the ease of use outweighs the performance impact
use outside and inside the inner UI layer: service
Another approach to this problem could be something like Blazor Component Bus.
Update
In addition to what was said by Just the benno, I may add that there is a fundamental difference between a CascadingValue component and a Singleton service regarding their scope. A Singleton service in a Blazor Server app is a singleton across the lifetime of the application, across multiple connections, and across multiple browsers, while a CascadingValue component is scoped to the current instance of your app. Changing the state of an object provided by a CascadingValue component has no effect on a new instance of the app. But if you change the state of a Singleton service in one instance of your app, this change is propagated to the other instances of your app. Try to imagine the implications of implementing the functionality of the CascadingAuthenticationState component as a Singleton service rather than a CascadingValue.
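In registration terms (in Startup.ConfigureServices; SharedCache and PerUserState are made-up names), the distinction looks like this:

// One instance for the whole server process: shared by every circuit,
// i.e. every connected user, tab, and browser.
services.AddSingleton<SharedCache>();

// One instance per circuit (per connection) in Blazor Server; much closer
// to the per-app-instance scoping that a root-level CascadingValue gives you.
services.AddScoped<PerUserState>();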
I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross-platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring code out into separate projects to reduce code duplication, which is also a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple: it's a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data used by all parts of the application, and a ProfileManager class which contains Profiles. The ProfileManager reads/saves Profiles as separate XML files on the hard drive and allows the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI was the Smart UI anti-pattern: a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than being well engineered. This led to ProfileManager being a singleton, so that the GUI could access the active Profile from anywhere in the application.
This meant I could just call ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each GUI had a save button, and thus they all had access to the ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong with using the singleton here, and that is exactly why I'm asking: I see nothing wrong with it, yet I know singletons aren't ideal. Is there a better way to handle this? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created and register for events for when profiles are changed? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.S. I'm having to build the application against the Compact Framework 3.5, which rules out a lot of the normal .NET Framework classes.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable state. Singletons are a great abstraction when your application really does need access to global, shared state: a mobile app that needs to coordinate access to the microphone or audio playback, for instance, since there's only one set of speakers.
In the case of your application, you have a single "active" profile that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to treat it as a singleton.
I do think you should use either dependency injection or a factory pattern to get hold of the profile manager, though. You only need to write a unit test for a class that uses a profile to understand the need for this: you want to be able to pass in a programmatically created profile at runtime; otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
One thing to consider is to have an interface for your ProfileManager and pass an instance of it to the constructor of each view (or anything else) that uses it. This way you can easily have a singleton, or an instance per thread/user/etc., or an implementation that goes to a database/web service/etc.
Another option would be to have everything that uses the ProfileManager call a factory instead of accessing it directly. The factory could then return an instance, which again may or may not be a singleton (backed by a database, file, web service, etc.), and most of your code doesn't need to know.
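A bare-bones sketch of the interface option (IProfileManager and its members are illustrative; the factory option would simply hide the "which instance" decision behind a factory method):

// Abstraction over the profile storage; could be backed by the singleton,
// a per-test fake, a database, a web service, etc.
public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

// Presenters depend on the interface, not on ProfileManager.Instance.
public class SettingsPresenter
{
    private readonly IProfileManager profiles;

    public SettingsPresenter(IProfileManager profiles)
    {
        this.profiles = profiles;
    }

    public void Save()
    {
        profiles.SaveActiveProfile();
    }
}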
This doesn't answer your direct question, but it does bring the impact of a future change close to zero.
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal: your application can be sure the ProfileManager is available to everything that needs it, and no other part of the application can instantiate an extra one that would conflict with the existing one. It also spares you ugly extra parameters/fields everywhere, where you're attempting to pass around the one instance and maintain extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
Singleton was designed to avoid multiple instantiations and to provide a single point of entry. If that's what you want, then that's the way to go. Just make sure it's well documented.
I hope this question makes sense. Basically, I am looking for a set of guidelines, or even a tutorial, that will show how to make an application that can easily add and remove "modules" or "add-ins".
For example, in Microsoft Office, you will commonly see programs that you can download and install and they will just add an extra tab into Microsoft Word (for example) that will implement some new feature.
I have several applications that use basically the same data source, and I'd like to consolidate them and also leave open the possibility of adding more functionality in the future without (1) requiring a brand-new install and (2) tweaking every piece of my code.
I'm looking for a place to start, mostly.
Thanks in advance.
Edit: To elaborate a little more...
The thing I have in mind specifically is an application that accesses a large set of data that is stored in text files and uses some of the data to create a few graphs and maybe some tables. I'd like the ability to add different graphs in the future using the same data. So, you can click Button_A and generate Graph_A, then a few weeks later, you can click Button_B and generate Graph_B.
It would be really nice if I could come up with a way that only required reading the data from the file(s) once, but I know that would involve having to adjust my DataReader class a bit.
One place to start would be to define an interface for your future modules and build a utility that scans the DLLs in a known add-ins directory, looking for classes that implement said interface.
Once you've found supporting classes, you can create instances at runtime and add them to your application. That's a common idiom in .NET for supporting "plug-ins".
The Activator class is a common way to create instances from a Type at runtime.
http://msdn.microsoft.com/en-us/library/system.activator.aspx
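Putting those two pieces together, a minimal loader might look like this (IPlugin and the folder layout are assumptions):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// The contract your add-ins must implement (illustrative):
public interface IPlugin
{
    string Name { get; }
    void Execute();
}

public static class PluginLoader
{
    // Scan a folder for DLLs and instantiate every concrete IPlugin found.
    public static IEnumerable<IPlugin> LoadFrom(string folder)
    {
        foreach (string dll in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            IEnumerable<Type> pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (Type type in pluginTypes)
                yield return (IPlugin)Activator.CreateInstance(type);
        }
    }
}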
It's hard to give more details without more info in your question. Can you elaborate a bit?
Take a look at the Composite Application Library from Microsoft.
It is aimed at WPF but you could get some ideas from there.
As Adam said, the first thing to do is define the interface for your plugin modules - what can they expect to receive from the container, and what methods must the container be able to call?
As far as the container itself goes, I'm partial to MEF as a location technology; you can create catalogs and re-compose the system when new DLLs are added. I've built a similar system to this for parsing dissimilar files, and the composition capabilities of MEF are awesome for runtime discovery.
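For example (the Plugins path and the use of recomposition are assumptions about your setup):

using System.ComponentModel.Composition.Hosting;

// Compose from whatever add-in assemblies are in the folder.
var catalog = new DirectoryCatalog(@".\Plugins");
var container = new CompositionContainer(catalog);

// Imports declared with [ImportMany(AllowRecomposition = true)] pick up
// newly added parts when the catalog is refreshed:
catalog.Refresh();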
I'm currently working on a large real-time OLAP application. All the data is held in RAM (a few gigabytes), and the common tasks involve brute-force scans over large quantities of that data (which is fine). The results of processing are exposed via a web service (singleton/multithreaded) and presented using a Silverlight-based client.
The problem is that various customers need different functionality/algorithms and I don't know how to provide extensibility on the server-side. For the client side (Silverlight) I can use MEF/Prism, but I'm not sure what would be a good approach to tackle this problem on the server.
Please note that ideally other web services should have direct access (i.e., without marshaling) to the data of the main/current service which holds the large data model.
Are there any:
a) frameworks/libraries
b) patterns
c) good practices
which would help me modularize the application and make the selection of desired modules and their deployment relatively easy?
Sounds to me like Dependency Inversion is required: isolate logical parts of the system (algorithms, etc.) by defining interfaces, then use a DI/IoC framework to load the desired implementation at runtime (or at application start, etc.).
I haven't used Ninject, but plenty of people love it, so you could try that; there's also Spring.Net.
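A sketch of what that isolation might look like with Ninject (the interface and the Result/DataModel types are invented for illustration):

using Ninject;

// Each customer-specific algorithm hides behind a common contract:
public interface IAggregationAlgorithm
{
    Result Compute(DataModel data);
}

public class CustomerAAggregation : IAggregationAlgorithm
{
    public Result Compute(DataModel data)
    {
        // The customer-specific scan over the in-memory model would go here.
        return new Result();
    }
}

// At startup, bind the contract to whichever implementation this deployment needs:
var kernel = new StandardKernel();
kernel.Bind<IAggregationAlgorithm>().To<CustomerAAggregation>();
IAggregationAlgorithm algorithm = kernel.Get<IAggregationAlgorithm>();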
Good Practices:
Ensure you have clear, precise logging so you know what's being used and when.
Think about whether you want a "default" implementation to load if the desired one fails, or whether you deliberately want to fail so that the wrong data isn't returned by mistake (such as from the use of a different algorithm).
I've found that using attributes to decorate injectable modules is really helpful (especially in a web-based system that you don't have immediate access to). One reason for this is that you can build pages or controls that list all the known/available implementations at runtime.
You can also use the attribute approach to build a UI that lets users select which one they want; I use it for an open source web-application framework I built: http://www.morphological.geek.nz/Morphfolia/Capabilities/AttributeDriven.aspx
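Roughly, the attribute idea looks like this (the attribute and its properties are invented; the example reuses the IAggregationAlgorithm contract from the sketch above):

using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Class)]
public class ModuleInfoAttribute : Attribute
{
    public string DisplayName { get; set; }
    public string Description { get; set; }
}

[ModuleInfo(DisplayName = "Weighted average", Description = "Customer-specific rollup")]
public class WeightedAverageAlgorithm : IAggregationAlgorithm
{
    public Result Compute(DataModel data)
    {
        return new Result();
    }
}

// An admin page can then enumerate every known implementation at runtime:
var known = AppDomain.CurrentDomain.GetAssemblies()
    .SelectMany(a => a.GetTypes())
    .Where(t => t.GetCustomAttributes(typeof(ModuleInfoAttribute), false).Length > 0);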
First off, I wish context-based storage was consistent across the framework!
With that said, I'm looking for an elegant solution to make these properties safe across ASP.NET, WCF, and any other multithreaded .NET code. The properties are located in some low-level tracing helpers (they are exposed via methods, if you're wondering why they're internal).
I'd rather not have a dependency on unneeded assemblies (like System.Web, etc.). I don't want to require anyone using this code to configure anything. I just want it to work ;) That may be too tall an order though...
Anyone have any tricks up their sleeves? (I've seen Spring's implementation)
internal static string CurrentInstance
{
    get
    {
        return CallContext.LogicalGetData(currentInstanceSlotName) as string;
    }
    set
    {
        CallContext.LogicalSetData(currentInstanceSlotName, value);
    }
}
internal static Stack<ActivityState> AmbientActivityId
{
    get
    {
        // Lazily create the stack the first time it is requested on this logical thread.
        Stack<ActivityState> stack = CallContext.LogicalGetData(ambientActivityStateSlotName) as Stack<ActivityState>;
        if (stack == null)
        {
            stack = new Stack<ActivityState>();
            CallContext.LogicalSetData(ambientActivityStateSlotName, stack);
        }
        return stack;
    }
}
Update
By safe I do not mean synchronized. Background on the issue here
Here is a link to (at least part of) NHibernate's "context" implementation:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Context/
It is not clear to me exactly where or how this comes into play in the context of NHibernate. That is, if I wanted to store some values in "the context", would I get "the context" from NHibernate and add my values? I don't use NHibernate, so I don't really know.
I suppose that you could look and determine for yourself if this kind of implementation would be useful to you. Apparently the idea would be to inject the desired implementation, depending on the type of application (ASP.NET, WCF, etc). That probably implies some configuration (maybe minimal if one were to use MEF to load "the" ICurrentSessionContext interface).
At any rate, I found this idea interesting when I found it some time ago while searching for information on CallContext.SetData/GetData/LogicalSetData/LogicalGetData, Thread.SetData/GetData, [ThreadStatic], etc.
Also, based on your use of CallContext.LogicalSetData rather than CallContext.SetData, I assume that you want to take advantage of the fact that information associated with the logical thread is propagated to child threads, as opposed to just wanting a "thread safe" place to store info. So, if you were to set (or push) the AmbientActivity in your app's startup and then not push any more activities, any subsequent threads would also be part of that same activity, since data stored via LogicalSetData is inherited by child threads.
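A tiny sketch of that inheritance (the slot name is invented):

using System;
using System.Runtime.Remoting.Messaging;
using System.Threading;

// Data stored with LogicalSetData flows to child threads, unlike
// Thread.SetData or [ThreadStatic] storage:
CallContext.LogicalSetData("ActivityId", Guid.NewGuid());

Thread child = new Thread(() =>
{
    // The child thread sees the value the parent set:
    Guid id = (Guid)CallContext.LogicalGetData("ActivityId");
    Console.WriteLine(id);
});
child.Start();
child.Join();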
If you have learned anything in the meantime since you first asked this question I would be very interested in hearing about it. Even if you haven't, I would be interested in learning about what you are doing with the context.
At the moment, I am working on maintaining some context information for logging/tracing (similar to Trace.CorrelationManager.ActivityId and Trace.CorrelationManager.LogicalOperationStack and the log4net/NLog context support). I would like to save some context (current app, current app instance, current activity (maybe nested)) for use in an app or WCF service, AND I want to propagate it "automatically" across WCF service boundaries. This is to allow logging statements logged in a central repository to be correlated by client/activity/etc. We would be able to query and correlate all logging statements for a specific instance of a specific application. The logging statements could have been generated on the client or in one or more WCF services.
The WCF propagation of ActivityId is not necessarily sufficient for us because we want to propagate (or we think we do) more than just the ActivityId. Also, we want to propagate this information from Silverlight clients to WCF services and Trace.CorrelationManager is not available in Silverlight (at least not in 4.0, maybe something like it will be available in the future).
Currently I am prototyping the propagation of our "context" information using IClientMessageInspector and IDispatchMessageInspector. It looks like it will probably work ok for us.
Regarding a dependency on System.Web, the NHibernate implementation does have a "ReflectiveHttpContext" that uses reflection to access the HttpContext so there would not be a project dependency on System.Web. Obviously, System.Web would have to be available where the app is deployed if HttpContext is configured to be used.
I don't know that using CallContext is the right move here if the desire is simply to provide thread-safe access to your properties. If that is the case, the lock statement is all you need.
However, you have to make sure you are applying it correctly.
With CallContext, you get thread-safe access because you have separate instances of CallContext (separate stores, rather) when calls come in on different threads. However, that's very different from making access to a shared resource thread-safe.
If you want to share the same value across multiple threads, then the lock statement is the way to go. Otherwise, if you want specific values on a per-thread/per-call basis, use the CallContext, the static GetData/SetData methods on the Thread class, or the ThreadStatic attribute (or any number of other thread-based storage mechanisms).
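For contrast, a minimal lock-based version of the first property from the question, synchronizing one value shared by all threads rather than keeping a value per logical call:

private static readonly object syncRoot = new object();
private static string currentInstance;

internal static string CurrentInstance
{
    get { lock (syncRoot) { return currentInstance; } }
    set { lock (syncRoot) { currentInstance = value; } }
}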