Can you reset the DI container in .NET Core?

We have several websites running on the same ASP.NET Core codebase. There's a SiteId in the appsettings.json that defines which website should be loaded from the database. We then have a Singleton set up:
builder.Services.AddSingleton<Site>(provider =>
{
    short siteId = configuration.GetValue<short>("CoreSettings:SiteID");
    var db = provider.GetRequiredService<SomeDatabaseService>();
    return db.GetSite(siteId);
});
The Site object is very static. It has basic data like CompanyName and Phone that rarely changes. However, this data can occasionally change and we'd like the Site singleton to get updated.
I know that we could change the Site to scoped, but this seems like a lot of DI overhead when the Site can go months without changing. Plus we'd have to change a lot of other services that are obviously singletons to scoped, because they depend on the Site.
Is there a way to tell the DI container to reset? I basically want to say "Hey DI, throw everything away and rebuild".
Right now, we're having to recycle the application pool in IIS to see any changes to the Site.

Is there a way to tell the DI container to reset? I basically want to say "Hey DI, throw everything away and rebuild".
No. At least not in the default DI container.
TBH this sounds much more like a caching problem than a DI one. I would consider reworking the approach for getting the Site by introducing an in-memory cache with some expiration set up, and/or a management endpoint to fetch new values.
but this seems like a lot of DI overhead
I would say that switching to Scoped carries very little DI overhead in itself; the only overhead worth considering is the unnecessary querying of the database, but that should be mitigated via a cache.
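To sketch the caching idea (SiteProvider and the one-hour expiration are my assumptions, not part of the original code): the Site stays behind a singleton provider that re-reads the database once its cache entry expires, so singleton dependents can keep depending on the provider and still see reasonably fresh data.

using System;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Configuration;

// Hypothetical provider: dependents stay singletons and read Current,
// which re-queries the database after the cached entry expires.
public class SiteProvider
{
    private readonly IMemoryCache _cache;
    private readonly SomeDatabaseService _db;
    private readonly short _siteId;

    public SiteProvider(IMemoryCache cache, SomeDatabaseService db, IConfiguration config)
    {
        _cache = cache;
        _db = db;
        _siteId = config.GetValue<short>("CoreSettings:SiteID");
    }

    public Site Current =>
        _cache.GetOrCreate("Site", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1); // assumed TTL
            return _db.GetSite(_siteId);
        });

    // A management endpoint could call this to force an immediate reload.
    public void Invalidate() => _cache.Remove("Site");
}

// Registration: builder.Services.AddMemoryCache();
//               builder.Services.AddSingleton<SiteProvider>();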
P.S.
If the Site were stored not in the database but in a configuration file, another notable approach would be the Options pattern with IOptionsMonitor.
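For completeness, a minimal sketch of that alternative, assuming the site data were moved into appsettings.json under a hypothetical "Site" section (SiteOptions is likewise made up for illustration):

using Microsoft.Extensions.Options;

public class SiteOptions
{
    public string CompanyName { get; set; }
    public string Phone { get; set; }
}

// In Program.cs - the default JSON provider reloads on change, so edits to
// appsettings.json show up without an app-pool recycle:
// builder.Services.Configure<SiteOptions>(builder.Configuration.GetSection("Site"));

public class SomeSingletonService
{
    private readonly IOptionsMonitor<SiteOptions> _site;

    public SomeSingletonService(IOptionsMonitor<SiteOptions> site) => _site = site;

    // CurrentValue always reflects the latest reload, even inside a singleton.
    public string CompanyName => _site.CurrentValue.CompanyName;
}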

Related

Does adding a lot of repositories to the Startup.cs cause any issues?

So... in my .Net Core 2 Startup file I am adding my repositories to the scope in the ConfigureServices method like so...
public void ConfigureServices(IServiceCollection services)
{
    var config = LoadConfiguration();

    services.AddDbContext<DatabaseContext>(options =>
        options.UseSqlServer(config.Connection, x => x.MigrationsAssembly("XXX")));

    // Repositories
    services.AddScoped<IUserRepository, UserRepository>();
    services.AddScoped<ISecurityFunctionRepository, SecurityFunctionRepository>();
    services.AddScoped<IUserSecurityFunctionRepository, UserSecurityFunctionRepository>();
    services.AddScoped<ICustomerRepository, CustomerRepository>();
    // ... lots more...

    // other goodies
}
I know there are a million ways to setup a .Net Core 2 API, but my specific question is whether or not having 30+ repositories added to scope will cause any issues with the server or memory, OR if there is a better way to scope a ton of repositories.
In other projects I have created several APIs with their own repositories. That technically avoids this issue, but it is a hassle I would like to avoid.
Regardless of whether it’s a good idea to use the repository pattern with Entity Framework (Core) or not, generally there is no problem with having many registered services.
Large applications will easily end up having a very high number of services that have to be registered with the dependency injection container. The way it works, every registration is not really more than an item in a list. So there’s no problem with having many registrations at all. ASP.NET Core internally will already register a lot of services on its own.
It also does not matter for the registration what the lifetime of each service is. Every registration will basically be identical there, and the lifetime is just a configuration that is stored with the registered types.
You will also have to remember that the registration will only happen once when your application starts, so even if there was a performance issue, it likely wouldn’t affect the application run-time.
What does matter however is what happens when a service gets resolved: Here, the lifetimes also make a difference. A singleton service will only ever have a single instance, so its construction will only run once. A transient service will result in a new instance on every resolve. And a scoped service will have at most one instantiation per request.
But if you do not depend on a service, then that service will also not be constructed. So if you have 30 scoped services, but there is only one that will be resolved on each request, then there is no issue with having those other ones. This would only get relatively expensive when you have a long dependency graph (e.g. service A depends on B and C, and those depend on D, E, F, G, … and so on) where it’s just a lot of dependencies that need to be resolved.
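A small illustration of that point (the repository types are hypothetical):

// Registration is cheap - each call just appends a descriptor to a list.
services.AddScoped<IUserRepository, UserRepository>();
services.AddScoped<ICustomerRepository, CustomerRepository>();
// ... dozens more ...

// Construction happens only on resolve. If a request's controller takes just
// IUserRepository, only UserRepository (plus its own dependencies) is
// instantiated for that request; the other registrations cost nothing.
public class UsersController
{
    private readonly IUserRepository _users;
    public UsersController(IUserRepository users) => _users = users;
}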
However, in general, there isn’t a problem with having many (smaller) services that have specific purposes and as such will only be used in certain parts of your application. That is actually probably a good idea to design your application.

Need help avoiding the use of a Singleton

I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross-platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring code out into separate projects to reduce duplication, and hence have a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple: it's a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data used by all parts of the application. It has a ProfileManager class which contains Profiles. The ProfileManager reads/saves Profiles as separate XML files on the hard drive, and allows the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI was the SmartGUI anti-pattern. It was a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than well engineered. This led to ProfileManager being a singleton, so that the GUI could access the active Profile from anywhere in the application.
This meant I could just call ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each GUI had a save button and therefore access to the ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong with using the singleton here, and yet I know singletons aren't ideal. Is there a better way this should be handled? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created too and register for events fired when profiles change? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.S. I'm having to build the application against Compact Framework 3.5, which rules out a lot of the normal .NET Framework classes.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable, state. Singletons are a great abstraction when your application really does need access to global, shared state: your mobile app that needs to access the microphone or audio playback needs to coordinate this, as there's only one set of speakers, for instance.
In the case of your application, you have a single, "active" profile, that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits into this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to have as a singleton.
I do think you should either use dependency injection or a factory pattern to get hold of a profile manager, though. You only need to write a unit test for a class that requires the use of a profile to understand the need for this; you want to be able to pass in a programmatically created profile at runtime, otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
One thing to consider is to have an interface for your ProfileManager, and pass an instance of that to the constructor of each view (or anything) that uses it. This way, you can easily have a singleton, or an instance per thread / user / etc, or have an implementation that goes to a database / web service / etc.
Another option would be to have everything that uses the ProfileManager call a factory instead of accessing it directly. That factory could then return an instance, which again could be a singleton or not (backed by a database, file, web service, etc.), and most of your code doesn't need to know.
Doesn't answer your direct question, but it does make the impact of a change in the future close to zero.
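A minimal sketch of the interface approach (names are hypothetical; plain constructor injection needs no container library, so it works fine on Compact Framework 3.5):

public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

// Each presenter receives the manager instead of reaching for a static Instance.
public class SettingsPresenter
{
    private readonly IProfileManager _profiles;

    public SettingsPresenter(IProfileManager profiles)
    {
        _profiles = profiles;
    }

    public void Save()
    {
        _profiles.SaveActiveProfile();
    }
}

// At startup the application still creates exactly one implementation and hands
// it to everything, preserving the single instance without hard-coded global access.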
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal so that your application can be sure that the Profile Manager is available to everyone that needs it, and that no other part of the application can instantiate an extra one that will conflict with the existing one. This reduces ugly extra parameters/fields everywhere too, where you're attempting to pass around the one instance, and then maintaining extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
Singleton was designed to avoid multiple instantiations and single point of "entry". If that's what you want, then that's the way to go. Just make sure it's well documented.

Is it possible to create a singleton WCF service that maintains memory per user session only?

I want to create a WCF service that employs a singleton pattern, but where the service does not share the same memory across users.
my WCF ServiceBehavior currently is setup to this:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
Unfortunately, there are instances where data gets mixed up between two different users (which is a bad thing).
I could do PerCall, but I would have to modify a whole lot of code :(
Just hoping there is still a chance for my code.
Singleton = any private field in the class is shared among all calls to the class. If you want a singleton service but separate data for each user, you must store the data elsewhere (not in the service instance) - for example in a database, loading it on every user's call.
A singleton service should be used only in very rare cases. Most of the time, using a singleton service just signals a wrong architecture.
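A hedged sketch of "store the data elsewhere": keep per-user state in a store keyed by the caller rather than in instance fields. UserData, LoadFromDatabase and the dictionary-backed store are illustrative stand-ins for a real database, and the identity lookup assumes a security context is configured.

using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyService
{
    // Any plain instance field here is shared by every caller - that is the
    // source of the mixed-up data. Per-user state goes in a keyed store instead.
    private static readonly ConcurrentDictionary<string, UserData> _perUser =
        new ConcurrentDictionary<string, UserData>();

    public UserData GetData()
    {
        string user = ServiceSecurityContext.Current.PrimaryIdentity.Name;
        return _perUser.GetOrAdd(user, key => LoadFromDatabase(key));
    }

    private static UserData LoadFromDatabase(string user)
    {
        // Hypothetical: fetch this user's data from persistent storage.
        return new UserData();
    }
}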
I think you are suffering from stateful services.
An easy way to spot them is to look at your class definition: if you find member variables in there, ask yourself whether you really need them to be members.
If no, just remove them.
If yes, try to work out whether you can put them in a cache (static or dynamic); see the sketch after this list.
If you can't put them in a cache, then fetch them from your database.
Here is what I have followed from years of experience:
1) Services should be as stateless as possible. Or just think of them as stateless.
2) For performance they can maintain two types of cache:
a) Static cache (a cache which is read-only and stays the same for as long as the set of services is serving), mostly built when the services start, e.g. your workflow cache.
b) Dynamic cache (which can be refreshed from time to time), e.g. an authorisation cache.
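A rough sketch of those two cache flavours (Workflow, Role and the loader methods are made up for illustration):

using System.Collections.Generic;

public static class ServiceCaches
{
    // a) Static cache: loaded once when the services start, then read-only.
    public static readonly IDictionary<string, Workflow> Workflows = LoadWorkflows();

    // b) Dynamic cache: swapped out wholesale from time to time,
    //    e.g. by a timer or an admin endpoint.
    private static volatile IDictionary<string, Role> _roles = LoadRoles();

    public static IDictionary<string, Role> Roles
    {
        get { return _roles; }
    }

    public static void RefreshRoles()
    {
        _roles = LoadRoles(); // replace the reference atomically
    }

    private static IDictionary<string, Workflow> LoadWorkflows()
    {
        return new Dictionary<string, Workflow>(); // hypothetical DB load
    }

    private static IDictionary<string, Role> LoadRoles()
    {
        return new Dictionary<string, Role>(); // hypothetical DB load
    }
}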

At which level should I apply dependency injection? Controller or Domain?

I would like to hear from you what the main advantages and drawbacks are of applying dependency injection at the controller level and/or the domain level.
Let me explain: if I receive an IUserRepository as a parameter for my User, I may proceed in two ways:
1) I inject IUserRepository directly into my domain object; I then consume User at the controller level without newing objects up, i.e. I get them ready-made from the DI container.
2) I inject IUserRepository into my controller (say, Register.aspx.cs), and there I new up all my domain objects using dependencies that came from the DI container.
Yesterday, when I was talking to my friend, he told me that if you get your domain objects from the container you lose control of their lifecycle, as the container manages it for you; he meant that it could be error-prone when dealing with large XML configuration files. That's an opinion I disagree with, as you can have a test that loops through every domain object within an assembly and asks the container whether it is a singleton, request scope, session scope or app scope, failing if any of them are true - a way of ensuring that this kind of issue won't happen.
I feel more inclined to use the domain approach (1), as I see a large saving in repetitive lines of code at the controller level (of course there will be more lines in the XML file).
Another point my friend raised was: imagine that for some reason you're obliged to change from DI container A to B, and say that B has no support for constructor injection (which is the case for the Seam container in Java, which manipulates BC or only does its work via setter injection). His point is that if I keep all the wiring at the controller level I can refactor my code smoothly, since I get access to tools like auto-refactoring and auto-complete, which are unavailable when you're dealing with XML files.
I'm stuck at this point, as I have to make a decision right away.
Which approach should I build my architecture on?
Are there other ways of thinking about this?
Do you guys really think this is a relevant concern? Should I worry about it?
If you want to avoid an anemic domain model you have to abandon the classic n-tier, n-layer CRUDY application architecture. Greg Young explains why in this paper on DDDD. DI is not going to change that.
CQRS would be a better option, and DI fits very well into the small, autonomous components this type of architecture tends to produce.
I'm not from the Java sphere, but judging by the details in your question it seems like you use some kind of MVC framework (since you deal with controllers and a domain). I do have an opinion about how to use DI in a Domain-Driven architecture, though.
First, there are several ways of doing DDD: some use MVC in the presentation layer and no application service layer between MVC and the domain; others use MVP (or MVVM) and no service layer. BUT I think many people will agree with me that you very rarely inject repositories (or other services...) into domain objects. I would recommend injecting repositories into Commands (if you use MVC and no service layer), Presenters (if you use MVP) or application services (if you use a service layer). I mostly use an application layer where each service gets the repositories it needs injected through the constructor.
Second, I wouldn't worry about switching between IoC containers. Most container frameworks today support constructor injection and can auto-resolve parameters. I know that you're a Java developer and I'm an MS developer, but the MS Practices team has a Common Service Locator that helps you produce code that is largely independent of which container framework you use. There is probably something similar in the Java community.
So go for option 2; a sketch follows below. Hope I pushed you in the right direction.
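A minimal sketch of option 2 (names are hypothetical): the repository is constructor-injected at the controller/application-service level, while domain objects are newed up normally.

public class RegistrationController
{
    private readonly IUserRepository _users;

    public RegistrationController(IUserRepository users)
    {
        _users = users;
    }

    public void Register(string name, string email)
    {
        var user = new User(name, email); // plain domain object, no container involved
        _users.Save(user);
    }
}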

When to use Singleton vs Transient vs Request using Ninject and MongoDB

I'm not quite sure when I should use SingletonScope() vs TransientScope() vs RequestScope() when I do my binding in my global.cs file.
I have, for example, my call to MongoSession (using NoRM and the mvcStarter project http://mvcstarter.codeplex.com/), which is set to SingletonScope, but I created a repository that uses this MongoSession object to make calls to Mongo easier. For example, I have a NewsRepository which uses MongoSession to fetch my News items from the database - one call fetches the News items that have DisplayOnHome set to true and gets the latest by CreationDate. Should such a repository be SingletonScope, or would RequestScope be more appropriate?
When should I use each of it and why?
In general in a web app, you want state to be request scope as much as possible.
Only in the case of very low-level optimisations are you ever likely to run into a case where it's appropriate to create singleton objects (and even then, chances are you'll pull such caching/sharing logic out into another class which gets pulled in as a dependency of your other [request-scope] objects, and make that singleton scope). Remember that a singleton in the context of a web app means multiple threads using the same objects. That is rarely good news.
On the same basis, transient scope is the most straightforward default (and that's why Ninject 2 makes it so) - request scope should only come into the equation when something needs to be shared for performance reasons, etc. (or because that's simply the context of the sharing, as mentioned in the other answer).
I guess the answer would depend on whether your MongoSession represents a unit of work or not. Most database related classes that I've worked with (mostly in the context of ORM, such as NHibernate or EF4) revolve around a context, entities, and tracked state that represent a unit of work. A unit of work should never be kept around longer than the length of time required to perform the given unit of work, after which the unit should be committed or rolled back. That would mean you should use RequestScope.
If your MongoSession is not a unit of work, you could keep it around for the lifetime of an MVC session, in which case SessionScope would then be appropriate.
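Put together, the bindings might look like this sketch (Ninject 2 syntax; InRequestScope() comes from the web extensions, and INewsRepository/ISettingsCache are made-up examples):

// Transient is Ninject 2's default, so a repository can simply be bound:
kernel.Bind<INewsRepository>().To<NewsRepository>();

// A unit-of-work style session should live exactly as long as one request:
kernel.Bind<MongoSession>().ToSelf().InRequestScope();

// Only truly shared, thread-safe infrastructure should be a singleton:
kernel.Bind<ISettingsCache>().To<SettingsCache>().InSingletonScope();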
From a deleted question, as requested by @shankbond above:
Disposal is not necessarily performed synchronously on your main request thread, as one might assume.
You probably want to stash a Block and then Dispose() it at an appropriate phase in your request (how are you going to handle exceptions?).
Have a look in the Ninject tests for more examples (seriously, go look - they're short and clear, and I didn't regret it when I finally listened the third time I was told to!).
See http://kohari.org/2009/03/06/cache-and-collect-lifecycle-management-in-ninject-20/
I am having this issue too. Lately I started working with MongoDB, and MongoDB recommends a singleton for MongoClient, so I am still not sure about my implementation and I am confused. I implemented Mongo in the DI container two ways, and I am not sure which one is good. Let's take the first approach.
Here I return a singleton instance of IMongoClient
services.AddSingleton<IMongoClient>(_ =>
{
    return new MongoClient(con.ConnectionString);
});
Then,
services.AddScoped<IMongoDatabase>(p =>
{
    var client = p.GetRequiredService<IMongoClient>();
    return client.GetDatabase(con.DatabaseName);
});
Then I return a scoped IMongoDatabase. In my repo, I inject the IMongoDatabase and then call my DB:
_dataContext = mongoDBClient.GetCollection<SomeCollection>(GetCollectionNameFromAppSetting(settings.DPUBotCollectionName));
In the second approach I was returning an IMongoDatabase as a singleton:
services.AddSingleton<IMongoDatabase>(_ =>
{
    //var connectionString = con;
    return new MongoClient(con.ConnectionString).GetDatabase("SomeDatabase");
});
Mongo says their MongoClient and IMongoDatabase are thread-safe. I am not sure which approach is right. I would appreciate it if you could give me an answer.
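For what it's worth: since the driver documents MongoClient and IMongoDatabase as thread-safe, a common arrangement (a sketch, not the only correct one) is to register both as singletons and create collections per use:

services.AddSingleton<IMongoClient>(_ => new MongoClient(con.ConnectionString));
services.AddSingleton<IMongoDatabase>(sp =>
    sp.GetRequiredService<IMongoClient>().GetDatabase(con.DatabaseName));
// Collections are cheap to obtain from the database object:
// var collection = database.GetCollection<SomeCollection>("someCollectionName");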
