I have recently learned that you can provide cascading values to the entire project by wrapping the Router component in a CascadingValue provider (see the Microsoft docs). How is this different from using dependency injection with a singleton pattern? (I know how injection works; I mean performance- and architecture-wise.)
Which do you think is better to use?
There has been some discussion about the performance impact of CascadingValues. I can recommend this article (it is an excellent tutorial as well).
As you mentioned, there are two aspects: performance and architecture.
Performance
I'd say [CascadingParameter] is costly compared to [Parameter] or "normal" fields and properties. Every component down the tree subscribes to changes of the [CascadingParameter]. If the value changes, a new cycle of OnParametersSet is started, which can lead to a rebuild of the render tree and a check of whether anything in the DOM needs to be changed. So even if no re-rendering is required, the process of reaching that conclusion consumes time. The more components there are, and the deeper the tree is, the slower this process becomes.
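For reference, the pattern from the question looks roughly like this (ThemeInfo and ButtonClass are illustrative names; any descendant that declares a matching [CascadingParameter] re-runs its parameter-set cycle when the cascaded value changes):

```razor
@* App.razor: wrap the Router so the value cascades to every routed component *@
<CascadingValue Value="@theme">
    <Router AppAssembly="@typeof(Program).Assembly">
        <Found Context="routeData">
            <RouteView RouteData="@routeData" />
        </Found>
    </Router>
</CascadingValue>

@code {
    private ThemeInfo theme = new ThemeInfo { ButtonClass = "btn-primary" };
}

@* AnyDescendant.razor: receives the value without it being passed explicitly *@
@code {
    [CascadingParameter]
    public ThemeInfo Theme { get; set; }
}
```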
Architecture
To discuss this aspect, we can think about the CascadingAuthenticationState. It is part of the authentication framework for Blazor and provides a way to check whether a user is authenticated. It is implemented as a cascading value instead of a singleton. Components down the tree, like menus, can easily use this value to hide/show items for non-authenticated users. Why?
Frequency of change and impact on the DOM
A question to answer is what impact a change of the cascading value has. If a user logs in or out, it is reasonable to assume that this will trigger a huge DOM change. So checking a huge part of the tree (if not the entire tree, depending on where the cascading value is placed) is not wasted overhead.
Besides, it is a good guess that there will be few changes to the AuthenticationState during the lifetime of the application.
Simplicity
The menu component uses the AuthorizeView component, which consumes the cascading parameter of type Task<AuthenticationState>.
<AuthorizeView>
    <Authorized>
        <li>Admin</li>
    </Authorized>
    <NotAuthorized>
        <li>Log in</li>
    </NotAuthorized>
</AuthorizeView>
This snippet is easy to read, and you can understand it very quickly. If you did the same thing with a singleton service, you would need to implement the "communication" between component and service yourself. It would be best to implement a subscribe/unsubscribe scenario, maybe using events or more advanced techniques. You would need to write your own code, and don't forget to write your implementation of IDisposable to unsubscribe.
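For comparison, here is a sketch of what the singleton-service version might look like. AuthService and its event are hypothetical names, not part of any framework:

```csharp
using System;

// Hypothetical singleton, registered in Program.cs with:
//   builder.Services.AddSingleton<AuthService>();
public class AuthService
{
    public bool IsAuthenticated { get; private set; }

    // Every consuming component must subscribe to this event itself.
    public event Action AuthenticationChanged;

    public void SetAuthenticated(bool value)
    {
        IsAuthenticated = value;
        AuthenticationChanged?.Invoke();
    }
}

// Each consuming component then has to wire up (and tear down) the
// subscription by hand, e.g. in a Blazor component:
//   protected override void OnInitialized() => Auth.AuthenticationChanged += StateHasChanged;
//   public void Dispose() => Auth.AuthenticationChanged -= StateHasChanged;  // don't forget!
```

All of that plumbing is exactly what the cascading-parameter version gives you for free.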
Cascading parameters are focused on UI
While a singleton service is a very generic approach that can solve many different issues, cascading values are specifically designed to solve UI update problems, and they do it very efficiently. In the case of the AuthenticationState, there is even a specialized view for it.
There is room to argue whether Blazor isn't all about UI, but modern, feature-rich GUIs sometimes use a layered approach inside the application. So by UI, I mean the part of the application that is ultimately responsible for rendering.
Services can be used outside of this inner UI layer, throughout the entire application, and then reused in the UI as well, while cascading parameters can only be used inside components.
Cascading parameters are (mostly) one-way
A cascading parameter is "owned" by a component, usually the component where it is declared. From that point, it is passed down the tree, and all other components can consume it. There is no straightforward, easy, and scalable way to update a value from a child component. There are ways to do it (hence "mostly"), but in my view it is a dirty path.
Summary
As with a lot of other technologies, the answer is: It depends on the use case.
top-down usage: cascading values
Components need to update the value: service
many changes: it depends highly on the tree structure whether the simplicity outweighs the performance impact
use outside and inside the inner UI layer: service
And, another approach to this problem could be something like Blazor Component Bus
Update
In addition to what was said by Just the benno, I may add that there is a fundamental difference between a CascadingValue component and a singleton service regarding their scope. A singleton service, in a Blazor Server app, is a singleton across the lifetime of the application, across multiple connections, and across multiple browsers, while a CascadingValue component is scoped to the current instance of your app. Changing the state of an object provided by a CascadingValue component has no effect on a new instance of the app. But if you change the state of a singleton service in one instance of your app, this change will be propagated to the other instances of your app. Try to imagine the implications of implementing the functionality of the CascadingAuthenticationState component as a singleton service rather than a CascadingValue.
Related
https://msdn.microsoft.com/en-us/library/microsoft.practices.unity.perrequestlifetimemanager(v=pandp.30).aspx states that:
Although the PerRequestLifetimeManager lifetime manager works correctly and can help in working with stateful or thread-unsafe dependencies within the scope of an HTTP request, it is generally not a good idea to use it when it can be avoided, as it can often lead to bad practices or hard to find bugs in the end-user's application code when used incorrectly. It is recommended that the dependencies you register are stateless and if there is a need to share common state between several objects during the lifetime of an HTTP request, then you can have a stateless service that explicitly stores and retrieves this state using the Items collection of the Current object.
What kind of bugs or bad practices is the warning referring to? How would one use it incorrectly? Unfortunately, the warning is not very specific and is therefore hard to apply to the real world. Furthermore, it is not clear to me what "stateful" means in this context.
IMHO, a typical scenario for using the PerRequestLifetimeManager would be some kind of database connection (e.g. DbContext) or similar.
Its purpose would be to only instantiate one instance per request, which could (for example) prevent redundant operations and lookups during the course of a single request.
The danger is if someone assumes that the object created is a good place to store state during the request. The idea of dependency injection is that a class receives a dependency (commonly an interface) and doesn't "know" anything about it at all except that it implements that interface.
But someone could reason that if the object is going to persist throughout the life of the request then it's a good place to maintain state during the request. So they create a complex scenario where one class receives the dependency, stores some information in it (like setting a property), and then another class receives that same dependency and expects to read that property.
Now the purpose of dependency injection (decoupling) has been defeated because classes have built-in assumptions about what the lifetime of that dependency is, and may even include assumptions about what other classes have done or will do with the state of that object. That creates a tangled mess where the interaction between classes is difficult to perceive - even hidden - and so it's easy to break.
Let's say someone determines that the lifestyle of that dependency should be transient, not per web request. Suddenly all of the behaviors of those classes that depend on it stop working as expected. So developers look at those classes and see that nothing has changed. What happened? The interaction between those classes was hard to see in the first place, so when it breaks the problem will be hard to find. And if there was some valid reason why the lifestyle of that dependency was changed then the problem is going to be even harder to fix.
If we need to store state during a request then we should put it in "normal" places like in the HttpContext. There's still room there for some confusing practices and bugs, but at least we know that the HttpContext is (by definition) going to be tied to a particular request.
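A sketch of that "normal place" for per-request state, using the Items collection the MSDN quote mentions (the key name and helper class are made up for illustration):

```csharp
using Microsoft.AspNetCore.Http;

// Store and retrieve per-request state via HttpContext.Items instead of
// smuggling it through the state of an injected dependency. Items is, by
// definition, tied to the current request.
public static class RequestState
{
    private const string Key = "MyApp.RequestState"; // arbitrary key

    public static void Set(HttpContext context, object value)
    {
        context.Items[Key] = value;
    }

    public static object Get(HttpContext context)
    {
        return context.Items.TryGetValue(Key, out var value) ? value : null;
    }
}
```

The dependency itself stays stateless, so changing its registered lifetime later cannot silently break classes that assumed shared state.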
Right now I am coding an application and am thinking that there has to be a better solution to what I am doing right now.
I have a main window which shall handle the settings of the program. Then I have further classes and windows. For example a language handler class and a form that is handling the user input needed for the "main function".
However, until now I have always had to pass my main window to each of these classes, because the language handler shall be able to change the main window's strings, and the other form should also be able to pass data to the main window.
If we imagine there will be many more classes, and every class needs a copy of the main window, this would consume a lot of resources depending on the main window's "size".
So, is there a better/more efficient way to communicate between these classes?
A common way to do that is to use the observer pattern, which in .NET is the event system. Simply said, your classes subscribe to each other's events and perform actions when an event is raised. As noted in a comment, passing references is not memory-heavy, but it results in tight coupling between different pieces of your code; the observer pattern addresses that problem.
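A minimal sketch of the event-based approach, using the language handler from the question (class and member names are made up for illustration):

```csharp
using System;

// The language handler exposes an event instead of holding a window reference.
public class LanguageHandler
{
    public event EventHandler<string> LanguageChanged;

    public void ChangeLanguage(string language)
    {
        // Raise the event; the handler does not know (or care) who listens.
        LanguageChanged?.Invoke(this, language);
    }
}

// The main window subscribes; the handler never needs a reference to it.
public class MainWindow
{
    public MainWindow(LanguageHandler handler)
    {
        handler.LanguageChanged += (sender, lang) => ApplyStrings(lang);
    }

    private void ApplyStrings(string language)
    {
        // update the window's strings for the new language
    }
}
```

The dependency now points only one way: the window knows about the handler, but not vice versa.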
Another option is to consider your classes as services. Code them to an interface and then use dependency injection (aka Inversion of Control) to build up the object graph (you tell the IoC container you want a frmSomething and it will determine what services/classes it needs and instantiate them as appropriate).
This means that:
you only ever have to code against an interface not an implementation
your code is loosely coupled (You can swap an OldTranslator for a NewTranslator and as long as they both comply to the same interface, nothing has to be changed except the configuration of the container)
you can develop high-level features which rely on services that haven't been written yet and your code will compile
You can very easily change how your app works, at run-time if needs be, by changing what classes/services are registered in your container.
Have a look at Unity for the MS-Supported DI container. Castle Windsor is a popular alternative but there are many more
It's worth noting that passing a "copy" of the main window around, as you've said, is not a bad thing: you're actually only passing a reference (effectively a pointer) to the main window (since anything more complex than the real primitives is a reference type). This means that there's very little overhead.
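With Unity, the registration described above might look like this (the interface and class names are illustrative, not from the question):

```csharp
using Microsoft.Practices.Unity;

// Register implementations against interfaces, then resolve the top-level
// form; the container builds the whole object graph for you.
var container = new UnityContainer();

// Swap NewTranslator for OldTranslator here without touching any caller.
container.RegisterType<ITranslator, NewTranslator>();
container.RegisterType<ISettingsStore, SettingsStore>();

// FrmSomething's constructor dependencies (ITranslator, etc.) are injected.
var form = container.Resolve<FrmSomething>();
```

Changing which implementation is used is now a one-line configuration change rather than an edit in every consuming class.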
I would suggest using the GalaSoft (MVVM Light) or Prism MVVM implementations. There you can use their messaging service, which is quite easy to use. The class that needs info just sends a message to the subscribers, and they in turn can send all the data needed. I think this is the easiest way to handle communication.
In addition to the answer given by IVAN: if we look at it from a higher-level view, without all those terminologies, you could create a static class that serves as in-memory storage and defines fields on it to save information.
This way you have complete control over what is being shared, and multiple components can change it.
Moreover, you can define getters and setters and raise an event whenever a property is changed, so that different forms or windows (views) can subscribe to the change and take action accordingly.
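A rough sketch of that static in-memory storage idea (all names are arbitrary):

```csharp
using System;

// Static class acting as shared in-memory storage with change notification.
public static class InMemoryStorage
{
    private static string _userName = string.Empty;

    // Views subscribe to this to react whenever the value changes.
    public static event Action<string> UserNameChanged;

    public static string UserName
    {
        get { return _userName; }
        set
        {
            _userName = value;
            UserNameChanged?.Invoke(value);
        }
    }
}
```

Note that this is effectively global mutable state, which carries the same trade-offs discussed for singletons elsewhere on this page.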
I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross-platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring out code into separate projects to reduce code duplication, and hence have a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple, its a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data that is used by all parts of the application. It has a ProfileManager class which contains Profiles. The ProfileManager will read/save Profiles as separate XML files on the harddrive, and allow the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI was the SmartGUI anti-pattern. It was a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than being well engineered. This led to ProfileManager being a singleton, so that the GUI could access the active Profile from anywhere in the application.
This meant I could just go ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each GUI had a save button, so they all had access to ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong with using the singleton here, yet I know singletons aren't ideal. Is there a better way this should be handled? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created and register for events for when profiles change? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.S. I'm having to build the application against the Compact Framework 3.5, which limits a lot of the normal .NET Framework classes that can be used.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable, state. Singletons are a great abstraction when your application really does need access to global, shared state: your mobile app that needs to access the microphone or audio playback needs to coordinate this, as there's only one set of speakers, for instance.
In the case of your application, you have a single, "active" profile, that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits into this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to have as a singleton.
I do think you should either use dependency injection or a factory pattern to get hold of a profile manager, though. You only need to write a unit test for a class that requires the use of a profile to understand the need for this; you want to be able to pass in a programmatically created profile at runtime, otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
One thing to consider is to have an interface for your ProfileManager, and pass an instance of that to the constructor of each view (or anything) that uses it. This way, you can easily have a singleton, or an instance per thread / user / etc, or have an implementation that goes to a database / web service / etc.
Another option would be to have all the things that use the ProfileManager call a factory instead of accessing it directly. Then that factory could return an instance, again it could be a singleton or not (go to database or file or web service, etc, etc) and most of your code doesn't need to know.
Doesn't answer your direct question, but it does make the impact of a change in the future close to zero.
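A sketch of the interface suggested in both answers above (the member names are taken from the question; RecordsPresenter is a made-up consumer):

```csharp
// Consumers depend only on this abstraction, never on the concrete manager.
public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

// A presenter receives the manager through its constructor...
public class RecordsPresenter
{
    private readonly IProfileManager _profiles;

    public RecordsPresenter(IProfileManager profiles)
    {
        _profiles = profiles;
    }
}

// ...so the XML-file-backed singleton, a database-backed version, or an
// in-memory test stub can all be supplied without changing the presenter.
```

Whether the concrete implementation behind the interface happens to be a singleton then becomes a wiring detail rather than something the rest of the code knows about.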
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal so that your application can be sure that the Profile Manager is available to everyone that needs it, and that no other part of the application can instantiate an extra one that will conflict with the existing one. This reduces ugly extra parameters/fields everywhere too, where you're attempting to pass around the one instance, and then maintaining extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
Singleton was designed to avoid multiple instantiations and single point of "entry". If that's what you want, then that's the way to go. Just make sure it's well documented.
I'm interested in ways to control composition scoping with MEF.
The most obvious example - web applications, where you have to create certain subset of components per request and dispose of them when the request is finished.
However, a general implementation of scoping may be useful in other contexts as well.
I'm looking at MEF2 preview and trying to make sense of it, but don't see a complete solution for some reason.
On one hand, there is this MVC integration module, where MEF is nice enough to take care of request scope for me, but that is not very usable outside of MVC (and outside of web for that matter), is it?
On the other hand, in the first preview-related post "What's new in MEF2", I've seen this thing called CompositionScopeDefinition that looks like an explicit specification for scopes, but with that one, I don't see a way to "close" the scope. To put it in other words: how does MEF determine when to dispose of components that were created within a scope?
And on third hand (yep :-), with MEF v1, I used to deal with scoping by creating nested CompositionContainers, but that doesn't work very well with custom ExportProviders.
What I would really like to see is something like:
using( var scope = compositionContainer.OpenScope( /* some scope definition here */ ) )
{
    var rootComponent = scope.GetExport<MyRootComponent>(); // the component graph gets composed at this point
    rootComponent.DoYourScopedThing();
} // the component graph gets disposed at this point
If I had that thing, I could easily build MVC integration on top of it, but I could also use it in other contexts.
So, the question again: what do you use to deal with scoping problems like that? Or do you say MEF is not yet mature enough for serious use?
Good question; we are working on more documentation that should answer your question about CompositionScopeDefinition. Short version: CSD is used via an ExportFactory<T>, where CreateExport() returns a handle that is used to control the lifetime of the scope.
However, CSD is intended and optimized for desktop application scenarios; as you have no doubt seen, the MVC integration uses filtered catalogs and nested containers to control lifetime. This is still the recommended approach for 'transactional'-type lifetime in web and other work-processing scenarios.
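A sketch of that ExportFactory<T> handle, reusing the hypothetical MyRootComponent from the question (WorkProcessor is also a made-up name):

```csharp
using System.ComponentModel.Composition;

public class WorkProcessor
{
    // MEF injects a factory whose products live in their own scope.
    [Import]
    public ExportFactory<MyRootComponent> RootFactory { get; set; }

    public void ProcessOne()
    {
        // CreateExport() opens the scope and composes the graph.
        using (var handle = RootFactory.CreateExport())
        {
            handle.Value.DoYourScopedThing();
        } // disposing the handle closes the scope and disposes its parts
    }
}
```

This gives roughly the open/dispose shape the question asked for, with the handle standing in for the imagined scope object.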
It would be good to know more about the problems you face using custom ExportProviders with this approach.
A stronger 'custom' lifetime story is something we're very much working towards; letting us know about where MEF 2 falls short for your scenarios, especially via the CodePlex discussion forum, is a great help.
I've found this post searching for details about CSD.
I want to use MEF to create an extensible WPF application with screen navigation that allows the client to open screen after screen inside a single window.
Each screen should have access to parts setup by previous screens and also have the ability to override some parts.
For example, when the user open a ProcessView it should have a ProcessProvider part which may be imported by screen navigated from the ProcessView, let's say ActivityView. The ActivityView should have access to the ProcessProvider so it will have context on which to operate.
Another example is that the root screen may have a ProcessListProvider which by default returns all processes in the database. A screen that wants to open the ProcessListView will need to somehow override the root ProcessListProvider with a customized one, so that the ProcessListView will still work but with the customized process list provider.
I hope I was able to communicate my requirements.
Ido.
I have developed some MVVM based WPF code and am in need of some minor refactoring but prior to doing that I need to decide the best architecture.
I originally started with an application that could present several similar (but separate) representations of my data. Let's call it RecordsViewModel, which had a corresponding RecordsView. As time went on, I introduced a SettingsViewModel, which was passed into the constructor of the RecordsViewModel and exposed publicly (allowing RecordsView to use it). The SettingsViewModel is subscribed to, so that changes are reflected in all of my views.
Now I want to split RecordsView a little because it now contains two distinct views.
The problem I have is:
the new (RecordsMainView and RecordsAlternativeView) both want to see Settings.
unlike the earlier RecordsView, which is programmatically instantiated, these new views are instantiated from XAML (default constructor).
So my options seem to be:
Walk the tree upwards to find a parent with a Settings property
Make Settings a DependencyProperty on the controls and have the XAML bind the property to the instance.
Make SettingsViewModel a Singleton.
Any other, better, options? Which would you consider best?
I would turn your settings logic into a service (ISettingsService) and use service locator or dependency injection to get at that service from whichever view models need it.
Services are great for managing shared state, and service locator / DI makes it very easy for your VMs to get a reference to the service. Storing shared state in a VM is a bit hacky and - as you've found - doesn't really scale. A good rule of thumb might be to ask yourself whether the state in the VM exists only to support the corresponding view, or whether other components will need access to that state. If the latter, move it into a service.
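A sketch of that service split (names are illustrative; the event mirrors the change notification the view models already rely on):

```csharp
using System;

// Shared state lives behind a service abstraction, not inside a VM.
public interface ISettingsService
{
    Settings Current { get; }
    event EventHandler SettingsChanged;
}

// Both new views' VMs take the service in their constructor (supplied via
// DI or a service locator) instead of one VM owning the state.
public class RecordsMainViewModel
{
    private readonly ISettingsService _settings;

    public RecordsMainViewModel(ISettingsService settings)
    {
        _settings = settings;
        _settings.SettingsChanged += (s, e) =>
        {
            // refresh whatever bindings depend on the settings
        };
    }
}
```

RecordsAlternativeView's VM would receive the same instance, so both views stay in sync without walking the tree or resorting to a singleton VM.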