I am using the Autofac MultitenantContainer, and each tenant has its own database + connection string registered with InstancePerLifetimeScope. The tenant is identified by a subdomain, which is mapped in a "master database" to a generated database name.
Now I have two use cases:
Use Case A: Creating new Tenants:
Someone fills in a registration form with the company name and submits it; after submission we generate a new database, and that tenant should be able to access the application under companyname.domain.com.
However, we want to do that without restarting the application, since a restart impacts all current tenants.
Let's say I want to add a new tenant at runtime. What is the best way to register it without restarting the application?
At first I thought about registering the container itself, injecting it into my MVC controller, and adding the new registration at runtime, but after reading some questions this appears to be bad practice.
I could also get the DependencyResolver from within the controller and access the container from there. Are there better practices available?
Use Case B: Register on demand
Assume we have a large number of tenants and want to avoid registering them all at once on application startup. We could instead register each tenant in the MultitenantContainer on its first request, when the subdomain can be matched to an existing account.
This might be premature optimization though, since we don't have many tenants yet.
But again, this would result in runtime registrations.
Container:
var tenantIdentificationStrategy = new TenantIdentificationStrategy();
var multitenantContainer = new MultitenantContainer(tenantIdentificationStrategy, builder.Build());

var tenants = new[]
{
    "companyA.domain",
    "localhost"
};

foreach (var id in tenants)
{
    var databaseName = $"tenant-{id}";

    multitenantContainer.ConfigureTenant(id, b =>
    {
        // Init RavenDB
        b.Register(context => new RavenDocumentSessionFactory(databaseName))
            .InstancePerTenant()
            .AsSelf();

        // Session per request
        b.Register(context => context.Resolve<RavenDocumentSessionFactory>()
                .FindOrCreate(context.Resolve<IDocumentStore>()))
            .As<IDocumentSession>()
            .InstancePerLifetimeScope()
            .OnRelease(x =>
            {
                x.SaveChanges();
                x.Dispose();
            });
    });
}
Your best bet is to hold a static reference to the application container somewhere and register your tenants from there. This is pretty common practice and, since your tenant registration code is going to have to "know" what a MultitenantContainer is anyway, it's not going to change your assembly references or spread the "knowledge" of the container around more than it would otherwise have to be.
Create the multitenant container at app startup.
Register the tenants you already know about.
Store the container in a static property somewhere that is globally accessible.
Reference the static property when you need to register a tenant at runtime (see the sketch below).
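Something along these lines, as a minimal sketch (the ContainerHolder/AddTenant names are illustrative; the tenant registration body mirrors the ConfigureTenant code from the question):

using Autofac;
using Autofac.Multitenant; // Autofac.Extras.Multitenant in older versions

public static class ContainerHolder
{
    // Set once at application startup, right after the MultitenantContainer is built.
    public static MultitenantContainer ApplicationContainer { get; set; }

    // Call this from wherever a new tenant is created (e.g. the registration controller).
    public static void AddTenant(string tenantId)
    {
        var databaseName = $"tenant-{tenantId}";

        ApplicationContainer.ConfigureTenant(tenantId, b =>
        {
            b.Register(context => new RavenDocumentSessionFactory(databaseName))
                .InstancePerTenant()
                .AsSelf();
        });
    }
}

The static property keeps the container knowledge in one place, and the new tenant is available on its next request without an application restart.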
Related
I'm trying to create a multi-tenant application using Autofac. The problem is, I cannot find a way to register a background service so that one instance of it runs per tenant (my goal is to avoid putting any multitenancy-related code in the background service itself).
When I try to register the background service like this:
public void ConfigureContainer(ContainerBuilder builder)
{
    builder.RegisterType<ValuesGenerator>().As<IHostedService>().InstancePerTenant();
}
only a single instance of the service is executed, and it fails to identify the tenant (which makes sense, because the tenant identification strategy is request-based).
I also tried to register the service in the multitenant container:
public static MultitenantContainer ConfigureMultitenantContainer(IContainer container)
{
    var tenantStore = container.Resolve<ITenantStore>();
    var httpContextAccessor = container.Resolve<IHttpContextAccessor>();
    var strategy = new TenantResolverStrategy(httpContextAccessor, tenantStore);
    var mtc = new MultitenantContainer(strategy, container);

    var tenants = tenantStore.GetTenants();
    foreach (var tenant in tenants)
    {
        mtc.ConfigureTenant(tenant.Id, cb =>
        {
            cb.RegisterType<ValuesGenerator>().As<IHostedService>().SingleInstance();
        });
    }

    return mtc;
}
However, this way nothing happens at all. I tried adding AutoActivate() after SingleInstance(), and then I could see that the services were activated (but not executed) and all instances failed to identify the tenant again (I had hoped that services living in a named tenant container would know their tenant by default). I also tried to override the tenant in the tenant identification strategy (using both the existing instance and resolving a new one), but with no effect.
My questions are:
How can I register an instance of the background service per tenant?
How can I set the current tenant manually (if I need to set something up during startup, when the tenant cannot be identified)?
If I cannot achieve this with Autofac, is there any alternative?
I'm using Bot Framework (v4) integrated with LUIS. In the ConfigureServices(IServiceCollection services) method in the Startup.cs file, I'm assigning storage and LUIS in the middleware. Below is the sample code.
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton(configuration);
    services.AddBot<ChoiceBot>(options =>
    {
        options.CredentialProvider = new ConfigurationCredentialProvider(configuration);

        var (luisModelId, luisSubscriptionKey, luisUri) = GetLuisConfiguration(configuration, "TestBot_Dispatch");
        var luisModel = new LuisModel(luisModelId, luisSubscriptionKey, luisUri);
        var luisOptions = new LuisRequest { Verbose = true };
        options.Middleware.Add(new LuisRecognizerMiddleware(luisModel, luisOptions: luisOptions));

        // Azure storage emulator
        //options.Middleware.Add(new ConversationState<Dictionary<string, object>>(new AzureTableStorage("UseDevelopmentStorage=true", "conversationstatetable")));

        IStorage dataStore = new AzureTableStorage("DefaultEndpointsProtocol=https;AccountName=chxxxxxx;AccountKey=xxxxxxxxx;EndpointSuffix=core.windows.net", "TableName");
        options.Middleware.Add(new ConversationState<Dictionary<string, object>>(new MemoryStorage()));
        options.Middleware.Add(new UserState<UserStateStorage>(dataStore));
    });
}
My bot will be getting requests from users in different roles (admin, sales, etc.). I want to change the table storage connection string passed to the middleware based on the role extracted from the incoming request. I get the user role by querying the DB with the user name, which is extracted from the current TurnContext object of the incoming request. I'm able to do this in the OnTurn method, but as these are already declared in the middleware, I wanted to change them while initializing the middleware itself.
In .NET Core, Startup logic is only executed once at, er, startup.😊
If I understand you correctly, what you need to be able to do is: at runtime, switch between multiple storage providers that, in your case, are differentiated by their underlying connection string.
There is nothing "in the box" that enables this scenario for you, but it is possible if use the correct extension points and write the correct plumbing for yourself. Specifically you can provide a customized abstraction at the IStatePropertyAccessor<T> layer and your upstream code would continue to work at that level abstraction and be none-the-wiser.
Here's an implementation I've started that includes something I'm calling the ConditionalStatePropertyAccessor. It allows you to create a sort of composite IStatePropertyAccessor<T> that is configured with both a default/fallback instance as well as N other instances that are supplied with a selector function that allows them to look at the incoming ITurnContext and, based on some details from any part of the turn, indicate that that's the instance that should be used for the scope of the turn. Take a look at the tests and you can see how I configure a sample that chooses an implementation based on the ChannelId for example.
I am a little busy at the moment and can't ship this right now, but I intend to package it up and ship it eventually. However, if you think it would be helpful, please feel free to just copy the code for your own use. 👍
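In the meantime, here's a rough sketch of the shape of the idea, assuming the Bot Framework v4 IStatePropertyAccessor<T> interface (the class name and selector wiring below are illustrative, not the packaged implementation):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

public class ConditionalStatePropertyAccessor<T> : IStatePropertyAccessor<T>
{
    private readonly IStatePropertyAccessor<T> _defaultAccessor;
    private readonly List<(Func<ITurnContext, bool> Selector, IStatePropertyAccessor<T> Accessor)> _conditionalAccessors =
        new List<(Func<ITurnContext, bool>, IStatePropertyAccessor<T>)>();

    public ConditionalStatePropertyAccessor(IStatePropertyAccessor<T> defaultAccessor)
    {
        _defaultAccessor = defaultAccessor;
    }

    public string Name => _defaultAccessor.Name;

    // Register an accessor to use whenever the selector matches the current turn.
    public void AddCondition(Func<ITurnContext, bool> selector, IStatePropertyAccessor<T> accessor) =>
        _conditionalAccessors.Add((selector, accessor));

    // Pick the first accessor whose selector matches this turn; otherwise fall back to the default.
    private IStatePropertyAccessor<T> Select(ITurnContext turnContext)
    {
        foreach (var (selector, accessor) in _conditionalAccessors)
        {
            if (selector(turnContext))
            {
                return accessor;
            }
        }

        return _defaultAccessor;
    }

    public Task<T> GetAsync(ITurnContext turnContext, Func<T> defaultValueFactory = null, CancellationToken cancellationToken = default(CancellationToken)) =>
        Select(turnContext).GetAsync(turnContext, defaultValueFactory, cancellationToken);

    public Task SetAsync(ITurnContext turnContext, T value, CancellationToken cancellationToken = default(CancellationToken)) =>
        Select(turnContext).SetAsync(turnContext, value, cancellationToken);

    public Task DeleteAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken)) =>
        Select(turnContext).DeleteAsync(turnContext, cancellationToken);
}

You would then build one accessor per storage connection string and add a selector that inspects the turn (for example the user's role) to decide which instance handles that turn.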
We are using ServiceStack with an OrmLiteCacheClient. We are using PostgreSQL and two different schemas within one database. I created custom interfaces for both connections (one for each schema in the db), and they both inherit from IDbConnectionFactory. How do I make certain that my cache is using the connection I want it to use?
You can't; they both use the same IDbConnectionFactory that's registered in your IoC.
I did something very similar to this in a recent website. I have a main OLTP database which shares data & processing with back-end systems, and a second database just to support caching & front-end related information. The main thing is that the second DB cannot be auto-wired into services, not without some additional jiggery-pokery.
Here's what I put in my AppHost Configure:
IDbConnectionFactory dbFactory = new OrmLiteConnectionFactory(
    ConfigurationManager.ConnectionStrings["Database"].ConnectionString,
    SqlServerDialect.Provider
);
// Register the main database as the default singleton
container.Register<IDbConnectionFactory>(dbFactory);

// Create a second factory, but ONLY use it to instantiate the Cache
IDbConnectionFactory dbCacheFactory = new OrmLiteConnectionFactory(
    ConfigurationManager.ConnectionStrings["BottleDropCache"].ConnectionString,
    SqlServerDialect.Provider
);

var cache = new OrmLiteCacheClient();
cache.DbFactory = dbCacheFactory;
cache.InitSchema();
container.Register<ICacheClient>(cache);
I have an application that uses multiple databases.
I found out I can change the database by using the connection string builder, like so:
var configNameEf = "ProjectConnection";
var cs = System.Configuration.ConfigurationManager.ConnectionStrings[configNameEf].ConnectionString;
var sqlcnxstringbuilder = new SqlConnectionStringBuilder(cs);
sqlcnxstringbuilder.InitialCatalog = _Database;
but then I need to change the Autofac lifetime scope of UnitOfWork so that it redirects the request to the right database instance.
What I found out after quite a while is that I can do it like this from a DelegatingHandler:
HttpConfiguration config = GlobalConfiguration.Configuration;
DependencyConfig.Register(config, sqlcnxstringbuilder.ToString());
request.Properties["MS_DependencyScope"] = config.DependencyResolver.GetRequestLifetimeScope();
The question is: is there any other way to do this that changes the MS_DependencyScope parameter of the request? This solution works, but I think it is kind of shady.
Here is the registration in DependencyConfig:
public static void Register(HttpConfiguration config, String bdContext = null)
{
    var builder = new ContainerBuilder();
    builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

    builder.Register(_ => new ProjectContext(bdContext)).As<ProjectContext>().InstancePerApiRequest();
    builder.RegisterType<UnitOfWork>().As<IUnitOfWork>().InstancePerApiRequest();

    // Register IMappingEngine
    builder.Register(_ => Mapper.Engine).As<IMappingEngine>().SingleInstance();

    config.DependencyResolver = new AutofacWebApiDependencyResolver(builder.Build());
    config.DependencyResolver.BeginScope();
}
From the way the question is described and the way the answer to my comment sounds, you have the following situation:
The application uses per-request lifetime units of work. I see this from your registrations.
Only one database is used in the application at a given point in time. That is, each request doesn't have to determine a different database; they all use the same one until the connection string changes. This is seen in the way the database is retrieved using a fixed application setting.
The connection string in configuration may change, at which point the database used needs to change.
Assuming I have understood the question correctly...
If the app setting is in web.config (as it appears), then changing the string in web.config will actually restart the application. This question talks about that in more detail:
How to prevent an ASP.NET application restarting when the web.config is modified?
If that's the case, you don't have any work to do - just register the database as a singleton and when the web.config changes, the app restarts, re-runs the app startup logic, gets the new database, and magic happens.
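For that case, one way to read "register as a singleton" is simply to compute the connection string once at startup and keep your existing per-request context registration (a minimal sketch; the catalog name is illustrative and ProjectContext/the builder come from the question's code):

// Runs once at application startup; editing web.config restarts the app and re-runs this.
var cs = ConfigurationManager.ConnectionStrings["ProjectConnection"].ConnectionString;
var csBuilder = new SqlConnectionStringBuilder(cs) { InitialCatalog = "MyTenantDb" }; // illustrative catalog name
var connectionString = csBuilder.ToString();

builder.Register(_ => new ProjectContext(connectionString))
       .As<ProjectContext>()
       .InstancePerApiRequest(); // or InstancePerRequest in newer Autofac versions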
If the app setting is not in web.config then you should probably create a project context factory class.
The factory would serve as the encapsulation for the logic of reading configuration and building the connection to the database. It'll also serve as the place to cache the connection for the times when the setting hasn't changed.
The interface would look something like this:
public interface IProjectContextFactory
{
    ProjectContext GetContext();
}
A simple implementation (without locking, error handling, logging, and all the good stuff you should put in) might be:
public class ProjectContextFactory : IProjectContextFactory
{
    private ProjectContext _currentContext = null;
    private string _currentConnectionString = null;
    private const string ConnectionKey = "ProjectConnection";

    public ProjectContext GetContext()
    {
        // Seriously, don't forget the locking, etc. in here
        // to make this thread-safe! I'm omitting it for simplicity.
        var cs = ConfigurationManager.ConnectionStrings[ConnectionKey].ConnectionString;
        if (this._currentConnectionString != cs)
        {
            this._currentConnectionString = cs;
            var builder = new SqlConnectionStringBuilder(cs);
            builder.InitialCatalog = _Database; // your database name, as in the question's code
            this._currentContext = new ProjectContext(builder.ToString());
        }
        return this._currentContext;
    }
}
OK, now you have a factory that caches the built project context and only changes it if the configuration changes. (If you're not caching the ProjectContext and are, instead, caching the database connection string or something else, the principle still holds - you need a class that manages the caching and checking of the configuration so the change can happen as needed.)
Now that you have a cache/factory, you can use that in your Autofac registrations rather than a raw connection string.
builder.RegisterType<ProjectContextFactory>()
       .As<IProjectContextFactory>()
       .SingleInstance();

builder.Register(c => c.Resolve<IProjectContextFactory>().GetContext())
       .As<ProjectContext>()
       .InstancePerRequest();
The ProjectContext will now change on a per request basis when the configured connection string changes.
Aside: I see odd stuff going on with the request lifetime scope. Your registration creates its own request lifetime scope. With this approach you shouldn't have to do that. If, however, you find that you still need to (or want to), you need to make sure both the originally created lifetime scope and the one you created are disposed. Lifetime scopes do not get disposed automatically, and they hang onto object references so they can handle disposal. There is a high probability that if you're not handling this properly you have a subtle memory leak. The Autofac Web API integration will take care of creating and disposing the request lifetime for you, but if you change out the request lifetime, odd things are going to happen.
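If you really do create a scope yourself, here is a minimal sketch of making sure it gets disposed (the tag constant is Autofac's; IUnitOfWork comes from the question's registrations):

using Autofac;
using Autofac.Core.Lifetime;

// Normally the Web API integration creates and disposes the request scope for you;
// only do this if you must own the scope yourself.
using (var requestScope = container.BeginLifetimeScope(MatchingScopeLifetimeTags.RequestLifetimeScopeTag))
{
    var unitOfWork = requestScope.Resolve<IUnitOfWork>();
    // ... use the unit of work for this request ...
} // the scope, and every instance it tracks, is released here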
I am building an MVC3 app using the Ninject framework. I have a service that is time-consuming to initialize, and once initialized it holds an object containing user-specific information. I need to re-use that service as long as the user session is active, so that I can avoid initializing it again and again.
So my question is:
When I bind the service using Ninject, what kind of scope should I pick? There is no session scope in Ninject, so what is the best way to implement this requirement? Or did I go in the wrong direction entirely?
I've created a custom provider for one of my services that creates the service based on the username grabbed from the current Controller.User.Identity.Name. The code below won't work because the userName local variable is missing. How can I pass the user name value into my custom provider via Ninject, so that I can pick it up from IContext?
public class TfsConnectionManagerProvider : Provider<TfsConnectionManager>
{
    protected override TfsConnectionManager CreateInstance(IContext context)
    {
        Uri serverUri = new Uri(ConfigurationHelper.TfsServerUrl);

        // Connect to the server without impersonation
        using (TfsTeamProjectCollection baseUserConnection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(serverUri))
        {
            // Get the identity management service
            IIdentityManagementService ims = baseUserConnection.GetService<IIdentityManagementService>();

            // Get the identity to impersonate
            TeamFoundationIdentity identity = ims.ReadIdentity
            (
                IdentitySearchFactor.AccountName,
                userName, // NOTE: How can I get the user name value from IContext???
                MembershipQuery.None,
                ReadIdentityOptions.None
            );

            // Connect using the impersonated identity
            using (TfsTeamProjectCollection impersonatedConnection = new TfsTeamProjectCollection(serverUri, identity.Descriptor))
            {
                WorkItemStore store = impersonatedConnection.GetService<WorkItemStore>();
                return new TfsConnectionManager
                {
                    Store = store
                };
            }
        }
    }
}
A session scope is intentionally not offered in Ninject, because keeping services in session state is wrong in almost every situation. You should be very careful about using session state because it brings a lot of disadvantages.
Try to have a stateless application in the first place.
If there is a good reason for having data in session scope, then put that data (not the services) into session state and use services that are in singleton, transient, or request scope for the processing (separation of data and functionality).
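A minimal sketch of that separation, with illustrative names (UserProfile, IGreetingService, and the Region value are not from the question):

using System.Web.Mvc;

// The data that was expensive to obtain; only this goes into session.
public class UserProfile
{
    public string UserName { get; set; }
    public string Region { get; set; }
}

// The service stays stateless and can be bound InSingletonScope() or InTransientScope().
public interface IGreetingService
{
    string Greet(UserProfile profile);
}

public class GreetingService : IGreetingService
{
    public string Greet(UserProfile profile)
    {
        return string.Format("Hello {0} ({1})", profile.UserName, profile.Region);
    }
}

public class HomeController : Controller
{
    private readonly IGreetingService _greetings; // injected by Ninject

    public HomeController(IGreetingService greetings)
    {
        _greetings = greetings;
    }

    public ActionResult Index()
    {
        // Cache only the data per user; the service itself holds no per-user state.
        var profile = Session["UserProfile"] as UserProfile;
        if (profile == null)
        {
            profile = new UserProfile { UserName = User.Identity.Name, Region = "EU" };
            Session["UserProfile"] = profile;
        }
        return Content(_greetings.Greet(profile));
    }
}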
I ended up using a custom provider for creating the instance, and in the custom provider I check whether it already exists in the session or not.
The binding is done as follows:
Bind<IRepository>().ToProvider(new TfsRepositoryProvider());
The custom provider is below:
public class TfsRepositoryProvider : Provider<TfsRepository>
{
    private const string SesTfsRepository = "SES_TFS_REPOSITORY";

    protected override TfsRepository CreateInstance(IContext context)
    {
        // Retrieve services from kernel
        HttpContextBase httpContext = context.Kernel.Get<HttpContextBase>();
        if (httpContext == null || httpContext.Session == null)
        {
            throw new Exception("No bind service found in Kernel for HttpContextBase");
        }

        return (httpContext.Session[SesTfsRepository] ?? (
            httpContext.Session[SesTfsRepository] = new TfsRepository(context.Kernel.Get<IWorkItemStoreWrapper>()))
        ) as TfsRepository;
    }
}
Okay, you can cache / store the user information in your application and only call the external service if you don't have (recent) user information. In your user information retrieval "layer", you just program those two possibilities.
Where you cache it is entirely up to you. You can store this information, for example, in a local database.
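A minimal sketch of such a retrieval layer, with illustrative names (IUserInfoService, UserInfo, and the in-memory cache are assumptions, not from the question):

using System.Collections.Concurrent;

public class UserInfo
{
    public string UserName { get; set; }
    public string DisplayName { get; set; }
}

// The slow external call you want to avoid repeating.
public interface IUserInfoService
{
    UserInfo Load(string userName);
}

// The retrieval layer: return cached info if present, otherwise call the service once.
public class CachedUserInfoProvider
{
    private readonly IUserInfoService _externalService;
    private readonly ConcurrentDictionary<string, UserInfo> _cache =
        new ConcurrentDictionary<string, UserInfo>();

    public CachedUserInfoProvider(IUserInfoService externalService)
    {
        _externalService = externalService;
    }

    public UserInfo GetUserInfo(string userName)
    {
        return _cache.GetOrAdd(userName, name => _externalService.Load(name));
    }
}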
Apparently I understood you wrong, my apologies (my original answer is below).
You can use, for example, an (abstract) factory that holds a static member of your service so it will be reused.
Although, depending on your service, this might have some unwanted side effects (I did this once with Data Services, and in an ASP.NET MVC3 application my data context got kind of screwed up due to some magic that happened). All I want to say with this is: be careful and test it well.