Which dependency injection lifetime should be preferred? - c#

For a REST API that has no dependencies between requests and is using ASP.NET Core DI.
I have heard conflicting arguments when choosing the most appropriate lifetime:
Prefer transient, then scoped, and avoid singleton because of memory and multi-threading issues, and because services with other (shorter) lifetimes should not be injected into singletons (captive dependencies).
Prefer singletons to save the time spent instantiating objects and to prevent opening multiple connections.
I understand they work differently but is there a 'go-to' lifetime? Should it start with 'transient' and move to others as required?
Are there any investigations proving the instantiation time saved by singleton is actually relevant?

I'm not going to say there's a right or wrong way here, but I'll share my personal experience.
I've tried both ways, and ended up finding that it's best to start with transient and then expand the scope as necessary.
Think in terms of the Single Responsibility Principle and Reasons for Change. The reason that your service's lifetime scope may need to change will likely be linked to a change you make to the service implementation itself. When that Reason happens, you want to only need to change a single class.
If you have a service that needs to be longer-lived than most of your other services, all you have to do is make it longer-lived and inject factories for any shorter-lived dependencies into it.
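As a minimal sketch with the ASP.NET Core container (all type names here are made up for illustration): a long-lived service can take a Func<T> factory instead of capturing the shorter-lived dependency directly.

// Hypothetical names; the singleton depends on a factory rather than capturing
// a transient instance (which would become a captive dependency).
public class ReportScheduler : IReportScheduler
{
    private readonly Func<IReportBuilder> _builderFactory;

    public ReportScheduler(Func<IReportBuilder> builderFactory)
    {
        _builderFactory = builderFactory;
    }

    public void Run()
    {
        var builder = _builderFactory();   // fresh, short-lived instance per use
        builder.Build();
    }
}

// Registration (services is the usual IServiceCollection in Program.cs/Startup.cs).
// Note: the delegate resolves from the root provider; if the dependency were scoped,
// you would create a scope here instead.
services.AddTransient<IReportBuilder, ReportBuilder>();
services.AddSingleton<Func<IReportBuilder>>(sp => () => sp.GetRequiredService<IReportBuilder>());
services.AddSingleton<IReportScheduler, ReportScheduler>();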
On the other hand, if you have a class that needs to be shorter-lived than most of your other services, you'll end up having to inject it as a factory into all the longer-lived services. So one Reason for Change impacts code all over your code base.
Of course, this is assuming you're doing things correctly: avoiding captive dependencies and remembering to make non-thread-safe services transient. I find it's far easier to remember which services need to be long-lived and override the defaults in that direction, than it is to remember which services shouldn't be long-lived and override the defaults that way. And I've found that failure to do the latter leads to much more insidious bugs that are harder to notice and result in greater damage.
With regards to performance: I haven't done performance testing with the ASP.NET Core built-in DI framework, but I've found that SimpleInjector is fast enough that the time and memory overhead of creating new instances of services is trivial in comparison with the other work my app does (like querying databases).
With regards to preventing opening multiple connections: SQL Server connections are pooled automatically, so the cost of creating a new SqlConnection() and disposing it several times in a request is generally pretty trivial.
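To illustrate (the connection string is a placeholder, and this assumes System.Data.SqlClient or Microsoft.Data.SqlClient), disposing a pooled connection just returns it to the pool:

using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
{
    connection.Open();   // typically grabs an existing physical connection from the pool
    using (var command = new SqlCommand("SELECT 1", connection))
    {
        command.ExecuteScalar();
    }
}   // Dispose returns the connection to the pool rather than closing the physical connection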

Should it start with 'transient' and move to others as required?
Yep. Transient services are simpler to implement correctly, and simply instantiating an object every time a dependency is resolved would normally not cause a significant performance problem.
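In ASP.NET Core terms that usually just means defaulting to AddTransient and promoting individual registrations only when there is a concrete reason to (the service names below are purely illustrative):

// Default: transient unless there's a reason not to be.
services.AddTransient<IOrderService, OrderService>();
services.AddTransient<IEmailSender, SmtpEmailSender>();

// Deliberate exceptions, each with a documented reason:
services.AddScoped<IUnitOfWork, EfUnitOfWork>();   // shares state within one request
services.AddSingleton<IClock, SystemClock>();      // stateless and thread-safe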

Related

Should a static class dispose of its IDisposable variables in a "static destructor"?

If a static class has any IDisposable static variables, should that class have a "static destructor" to dispose of them? For example:
public static class StaticClass
{
    static SomeDisposableType someDisposable = new SomeDisposableType();

    static readonly StaticDestructor staticDestructor = new StaticDestructor();

    private sealed class StaticDestructor
    {
        ~StaticDestructor()
        {
            someDisposable.Dispose();
        }
    }
}
No, it should not.
There's no way for the runtime to know when your static class will be used for the last time, so there's no point at which it could safely invoke cleanup early.
Therefore, the only "sensible" time to perform cleanup would be when the entire process is about to terminate. But that's not the right time to be cleaning up either. For a similar, unmanaged, take on this, read Raymond Chen's When DLL_PROCESS_DETACH tells you that the process is exiting, your best bet is just to return without doing anything:
The building is being demolished. Don’t bother sweeping the floor and emptying the trash cans and erasing the whiteboards.
Now, some may bring up the argument that some of your disposables may represent external resources that will not be cleaned up/released when the OS destroys your process. Whilst that's true, those external resources have to cope with e.g. your process being terminated by the user or (if not co-located on the same machine) a power supply failure taking the entire machine away. You don't get to run any cleanup code when there's a power cut. So they already have to be coded to deal with your process not being able to release the resources.
There are some code smells happening here.
StaticClass is tightly coupled to the specific types that it depends upon, rather than just their interfaces.
StaticClass determines the lifetime of the services that it uses.
These issues prevent StaticClass from being thoroughly unit-tested. For example, you cannot test the behavior of StaticClass without also testing the behavior of SomeDisposableType.
I'd almost always recommend making your StaticClass non-static, and using constructor injection to inject the services it depends on as interfaces, allowing a Dependency Injection framework's configuration to determine the lifetime of those objects.
If there's no compelling reason to have StaticClass be a singleton, then just let it be transient. Your DI framework should take care of cleaning up the disposable that gets injected into it.
If there is a compelling reason to have StaticClass be a singleton, think really hard about your separation of concerns: is StaticClass doing too much? For example, maybe it's doing some work to find values, and then storing those values to avoid doing that work again later. Or perhaps it's saving the state of certain properties of your application, and acting based on that state. In these cases, you can usually pull the state-saving or memoizing/caching work out into a separate class that can be singleton-bound. Then your service that consumes this state or cached values can still be transient, and its disposable dependencies can still be disposed after it's done a specific job.
If, after considering all of the above, you're still convinced this class needs to have a long lifetime, you should carefully consider the lifetime of your disposable dependency. Usually if a class is disposable, that's because it holds on to resources that should be released from time to time. In that case, rather than injecting that class directly, perhaps you should inject a factory which you can use to construct the service on-demand and then dispose it as soon as an action is complete via a using statement.
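A minimal sketch of that last suggestion (the factory and connection types are hypothetical; the point is that the disposable only lives for the duration of one action):

public interface IManagedConnection : IDisposable
{
    void Execute(string sql);
}

public interface IConnectionFactory
{
    IManagedConnection Create();   // each call returns a new, short-lived resource
}

public class LongLivedService
{
    private readonly IConnectionFactory _connectionFactory;

    public LongLivedService(IConnectionFactory connectionFactory)
    {
        _connectionFactory = connectionFactory;
    }

    public void DoWork()
    {
        // The disposable exists only for the duration of this one action.
        using (var connection = _connectionFactory.Create())
        {
            connection.Execute("...");
        }
    }
}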
It's hard to make more specific recommendations without knowing more about your specific classes, but these are the patterns I've found to work best in the vast majority of cases.

Using transient factories in Castle Windsor

If you use the TypedFactoryFacility in Windsor to generate factories for you based on an interface, the factory itself can be registered as Transient. It turns out that the factory will then release all transient objects that it created when the factory itself is disposed after release.
container.AddFacility<TypedFactoryFacility>();
container.Register(
    Types.FromThisAssembly()
        .BasedOn<IFactory>()
        .Configure(x => x.AsFactory())
        .LifestyleTransient());
This means that if I register the auto-generated factories as transient, it neatly allows me to forget about Releasing the objects created from the factory (as long as I manage the lifetime of the factory itself, of course). The lifetime of such a transient factory would be tied to some other transient object, so when that object gets released, the factory is released, and so is everything that factory created.
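For context, a typed factory here is just an interface (the names below are made up); Windsor generates the implementation when it is registered with AsFactory(), and by convention a void method taking a component acts as a release method:

public interface IWidgetFactory : IFactory
{
    IWidget Create();              // resolved from the container
    void Destroy(IWidget widget);  // optional explicit release
}

// Consumer: no explicit container.Release() calls are needed here, because
// releasing the transient factory also releases everything it created.
public class WidgetConsumer
{
    private readonly IWidgetFactory _factory;

    public WidgetConsumer(IWidgetFactory factory)
    {
        _factory = factory;
    }

    public void UseWidget()
    {
        var widget = _factory.Create();
        widget.DoSomething();
    }
}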
Personally, I hate having to remember to explicitly Release() objects, and this cleans up the code a lot, so this makes me very happy. But the question is: Is this reasonable practice? It all just seems too easy.
Is there a downside to creating a new factory instance each time a factory is required?
Am I digging myself a terrible hole?
The automatic factory mechanism in Castle is based on Castle.DynamicProxy; I imagine that if you use the factory as a transient component you have to pay for the creation of the ProxyGenerator at each resolution, unless some other caching mechanism exists.
The documentation warns against recreating the ProxyGenerator each time:
If you have a long running process (web site, windows service, etc.) and you have to create many dynamic proxies, you should make sure to reuse the same ProxyGenerator instance. If not, be aware that you will then bypass the caching mechanism. Side effects are high CPU usage and constant increase in memory consumption.
However it is entirely possible that there is some mechanism that prevents this problem in Windsor, or that the documentation is obsolete.
My recommendation: test with a loop that resolves your transient factory and then resolves a component from it; keep an eye on memory (and on loaded assemblies, since the proxied code may be generated into a separate assembly) and processor usage to determine whether or not your trick is viable.
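A rough sketch of that kind of check (the factory and component names are made up; what you care about is whether memory, CPU, and the loaded-assembly count keep growing over many iterations):

// Resolve the transient factory and a component from it many times,
// then look at what has accumulated.
for (var i = 0; i < 100_000; i++)
{
    var factory = container.Resolve<IWidgetFactory>();
    var widget = factory.Create();
    widget.DoSomething();
    container.Release(factory);   // should also release what the factory created
}

Console.WriteLine($"Managed memory: {GC.GetTotalMemory(forceFullCollection: true)} bytes");
Console.WriteLine($"Loaded assemblies: {AppDomain.CurrentDomain.GetAssemblies().Length}");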
So the deeper question is related to IDisposable. Why are you using it? Are you really keeping track of unmanaged/limited resources? Are you using it to really dispose something or as a shutdown protocol?
All of this goes away if you change Windsor's release policy, which makes you responsible for when/if to call Dispose. The container is trying to help you by keeping track of what to dispose and when, but it can only do so much, so sometimes it's better to step up and take over that responsibility.
Another option is to turn off the tracking and expose a service that keeps track of what needs to be disposed. This can be more semantically meaningful and better reflect your business/infrastructure needs.
In sum: what you're doing is fine, but it's not all that common to instantiate a transient factory every time you need the factory to create something. Make sure you document this for your own future self.

Should the DbContext in EF have a short life span?

I have a few long running tasks on my server. Basically they are like scheduled tasks - they run from time to time.
They all require access to the DB, and I use Entity Framework for that. Each task uses a DbContext for access.
Should the DbContext object be recreated on every run or should I reuse it?
I should say "it depends" as there are probably scenarios where both answers are valid, however the most reasonable answer is "the context should be disposed as soon as it is not needed" which in practice means "dispose rather sooner than later".
The risk that comes from such answer is that newcomers sometimes conclude that the context should be disposed as otfen as possible which sometimes lead to a code I review where there are consecutive "usings" that create a context, use it for one or two operations, dispose and then another context comes up next line. This is of course not recommended also.
In case of web apps, the natural lifecycle is connected with a lifecycle of web requests. In case of system services / other long running applications one of lifecycle strategies is "per business process instance" / "per usecase instance" where business processing / use case implementations define natural borders where separate instances of contexts make sense.
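As a sketch of the "per use-case instance" strategy for a scheduled task (MyDbContext, the Sessions set, and the task itself are placeholders):

// One context per run of the task: long enough to cover the whole unit of work,
// short enough that it doesn't live between scheduled runs.
public class CleanupTask
{
    public void Run()
    {
        using (var context = new MyDbContext())
        {
            var stale = context.Sessions
                .Where(s => s.LastActivity < DateTime.UtcNow.AddDays(-30))
                .ToList();

            context.Sessions.RemoveRange(stale);
            context.SaveChanges();
        }   // disposed as soon as this run's unit of work is complete
    }
}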
Yes, DbContext should only live for a very short time. It is effectively your unit of work.
You should definitely create it each time you're going to use it. (Well, you should inject it but that's another discussion :-))
Update: OK, I accept that 'create it each time you're going to use it' could be misleading. I'm so used to the context being an instance on a class that is injected, and so lives only for the life of a request, that I struggle to think of it any other way... #wiktor's answer is definitely better, as it more correctly expresses the idea that you should "dispose sooner rather than later".

WSTrustChannelFactory as a singleton. Is it the best practice?

I'm working with an application that uses WSTrustChannelFactory to create trust channels, and I noticed that the code creates a new WSTrustChannelFactory every time a new channel is needed.
I've never worked with this before, but since this is a factory, I suppose it can be implemented as a singleton.
Am I right? If so, are there any additional considerations (will the factory always be "usable", or are there scenarios/exceptions where it should be replaced with a new instance)? Also, is the factory creation an expensive operation, like a WCF ChannelFactory creation?
Am I right?
Yes, I think you are. I've worked on several projects where we used a channel factory, and each time it was a singleton. It certainly has its limits and can become a bottleneck under a very high load, but for a lot of implementations I think you are fine.
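One common way to express that singleton is a lazily created factory (the binding, endpoint, and trust version below are placeholders; whether one shared factory fits depends on every channel using the same configuration and credentials):

// Create the factory once and reuse it; creating channels from it per call is
// comparatively cheap.
public static class TrustChannelProvider
{
    private static readonly Lazy<WSTrustChannelFactory> Factory =
        new Lazy<WSTrustChannelFactory>(() =>
        {
            var factory = new WSTrustChannelFactory(
                new WS2007HttpBinding(),
                new EndpointAddress("https://sts.example.com/trust/13"));
            factory.TrustVersion = TrustVersion.WSTrust13;
            return factory;
        });

    public static IWSTrustChannelContract CreateChannel() => Factory.Value.CreateChannel();
}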

IoC Conflicts within a WCF Service

We've created several WCF services that process asynchronous requests. We're using basicHttpBinding, so our InstanceContextMode is PerCall, and this is what's causing a little confusion. We're seeing unusual behavior from the parts of the application that are injected using Microsoft's Unity container.
We're resolving the reference below to create a singleton of Foo that is used throughout the application. However, when the service is hit in quick succession, Foo will occasionally throw exceptions that indicate that it is being accessed by multiple threads and having its state changed in unexpected ways as a result.
Container.RegisterType<IFoo, Foo>(new ContainerControlledLifetimeManager());
Now, if we change the lifetime manager to TransientLifetimeManager (essentially telling the container to inject a new instance of the class every time it's resolved), the issue is corrected.
Container.RegisterType<IFoo, Foo>(new TransientLifetimeManager());
From my understanding, WCF doesn't control the lifetime of the AppDomain; the host does. In our case, that is IIS. So, given this information, is it possible that our PerCall WCF requests are working correctly, but that, due to how the AppDomain is being managed, we are accessing the same injected object because of its singleton registration?
Thanks for your time!
Have a look at UnityWcf. I have tried a couple of different approaches to aligning the lifetime of objects in Unity to the InstanceContextMode in WCF. This works very well:
http://unitywcf.codeplex.com
