.NET Core 2, EF and Multi Tenancy - DbContext switch based on user - c#

I have (almost) the worst case of multi tenancy. I'm building an ASP.NET Core website that I'm porting a bunch of pokey little intranet sites to. Each subsite will be an ASP.NET Area. I have an IdentityContext for the Identity stuff. I have multiple copies of vendor databases, each of those with multiple tenants. The ApplicationUser class has an OrgCode property that I want to use to switch the db context.
I can see myself needing something that maps User.OrgCode and Area to a Connection string
There are many partial examples of this on Stack Overflow. I am very confused after an afternoon's reading. The core of it seems to be:
remove DI dbcontext ref from the constructor args.
Instantiate the dbcontext in the controller constructor.
Use dbcontext as before.
Am I on the right track?
Any coherent examples?
Edit 2020/07/09
This has unfortunately become more pressing.
The Identity database is tenant agnostic. Every user in Identity has an OrgCode identifier. (Custom user property).
Each server has multi-tenancy built in through the use of 'cost centers'. Each server has a collection of databases, named the same on every server:
core vendor database
custom database where we store our extensions
logs database for our job output
There are also small application specific databases that already use an Org Code to identify a user
Server A - 1 Org Code
Server B - 4 Org Codes
Server C - 3 Org Codes engaged in project, 50+ not yet (mostly small)
Server D - No Org Codes engaged as of now. 80+ on server. (soon)
It is not possible to consolidate all the organisations onto one server. There are legal and technical ramifications. Each server has hundreds of remote transponders reporting to them that would need updating. The data these supply is what our custom jobs work with.
The dream is to continue to use DI in each page, passing in the contexts as required. The context would then be smart enough to pick the correct underlying connection details based on the OrgCode of the username.
I hesitate to use the word proxy because it seems heavily loaded in this space.
Hell, even using a switch statement would be fine if I knew where to put it
Desired effect User from Org XYZ loads page that requires Vendor database, they get the one from the server that XYZ maps to.
Edit 2020/07/13
To tidy up referencing, I've switched the OrgCode and Server to enums. The context inheritance is as follows:
DbContext
    CustLogsContext
        public virtual ServerEnum Server => ServerEnum.None;
        DbSet (etc)
        CustLogsServerAContext
            public override ServerEnum Server => ServerEnum.ServerA;
        CustLogsServerBContext (etc)
        CustLogsServerCContext (etc)
        CustLogsServerDContext (etc)
    VendorContext
        VendorServerAContext
        VendorServerBContext (etc)
        VendorServerCContext (etc)
        VendorServerDContext (etc)
I've also created a static class OrgToServerMapping that contains a dictionary mapping OrgCodes to Servers. It's currently hardcoded, but will eventually load from config, with a reload method added.
Currently I'm thinking I need a class that collects the contexts. It would have a Dictionary<ServerEnum, DbContext> and be registered as a service. Pretty sure I'd need a version of the object for each inherited DbContext, unless someone knows some polymorphic trick I can use.
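Roughly what I have in mind, sketched out (none of this is tested; the collection class, OrgCodeEnum and ForOrg are invented names, and context construction is omitted):

```csharp
// Sketch only. ServerEnum, OrgToServerMapping and CustLogsContext are the
// types described above; everything else is invented for illustration.
public class CustLogsContextCollection
{
    private readonly IReadOnlyDictionary<ServerEnum, CustLogsContext> _contexts;

    public CustLogsContextCollection(IReadOnlyDictionary<ServerEnum, CustLogsContext> contexts)
    {
        _contexts = contexts;
    }

    // Effectively the "switch statement" I don't know where to put:
    public CustLogsContext ForOrg(OrgCodeEnum orgCode)
    {
        ServerEnum server = OrgToServerMapping.Map[orgCode];
        return _contexts[server];
    }
}

// Registered as a service, a page would then ask for
// collection.ForOrg(currentUser.OrgCode) instead of a raw context.
```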

I work on a similar system with thousands of databases, but with LinqToSql instead of EF (I know...). Hopefully the general ideas translate. There are connection pool fragmentation issues that you have to contend with if you end up with many databases, but with just your handful of servers you won't have to worry about that.
I like these two approaches - they both assume that you can set up the current ApplicationUser to be injected via DI.
Approach #1: In Startup, configure the DI that returns the data context to get the current user, then use that user to build the correct data context. Something like this:
// In Startup.ConfigureServices
services.AddScoped<ApplicationUser>((serviceProvider) =>
{
    // something to return the active user however you're normally doing it.
});

services.AddTransient<CustLogsContext>((serviceProvider) =>
{
    ApplicationUser currentUser = serviceProvider.GetRequiredService<ApplicationUser>();
    // Use your OrgToServerMapping to create a data context
    // with the correct connection
    return CreateDataContextFromOrganization(currentUser.OrgCode);
});
Approach #2: Rather than injecting the CustLogsContext directly, inject a service that depends on the active user that is responsible for building the data context:
// In Startup.ConfigureServices
services.AddScoped<ApplicationUser>((serviceProvider) =>
{
    // something to return the active user however you're normally doing it.
});
services.AddTransient<CustLogsContextWrapper>();

// In its own file somewhere
public class CustLogsContextWrapper
{
    private readonly ApplicationUser currentUser;

    public CustLogsContextWrapper(ApplicationUser currentUser)
    {
        this.currentUser = currentUser;
    }

    public CustLogsContext GetContext()
    {
        // use your OrgToServerMapping to create a data context with the correct connection;
        return CreateDataContextFromOrganization(currentUser.OrgCode);
    }
}
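For illustration, consuming the wrapper from a controller would then look something like this (the controller, action, and LogEntries set are made-up names):

```csharp
// Hypothetical consumer of CustLogsContextWrapper; the controller name,
// action and LogEntries DbSet are invented for illustration.
public class LogsController : Controller
{
    private readonly CustLogsContextWrapper wrapper;

    public LogsController(CustLogsContextWrapper wrapper)
    {
        this.wrapper = wrapper;
    }

    public IActionResult Index()
    {
        // GetContext() resolves the connection for the current user's OrgCode.
        CustLogsContext context = wrapper.GetContext();
        return View(context.LogEntries.ToList());
    }
}
```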
Personally I prefer the latter approach, because it avoids a call to a service locator in Startup, and I like encapsulating away the details of how the data context is created. But if I already had a bunch of code that gets the data context directly with DI, the first one would be fine.

I have created a multi-tenancy implementation as follows (which could scale endlessly, in theory). Create a multi-tenancy database (say tenantdb). Easy. But the trick is to store connection string details for each tenant (your target databases), alongside your user OrgCode etc.
I can see myself needing something that maps User.OrgCode and Area to a Connection string
So the way to map it in code is to feed your dbcontext the target tenant connection string, which you get from your tenantdb. So you need another dbcontext for your tenantdb. First call your tenantdb and get the correct tenant connection string by filtering on your user OrgCode, then use it to create a new target dbcontext.
The dream is to continue to use DI in each page, passing in the contexts as required. The context would then be smart enough to pick the correct underlying connection details based on the OrgCode of the username.
I have this working with DI.
I created UI elements for CRUD operations on this tenantdb, so I can add, update and delete connection string details and other needed data. The password is encrypted on save and decrypted on get, just before passing it to your target dbcontext.
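The encrypt-on-save / decrypt-on-get step could be done along these lines (a sketch using ASP.NET Core's data-protection API; the class name and purpose string are my own inventions):

```csharp
// Sketch of the encrypt/decrypt step using ASP.NET Core data protection.
// The class name and purpose string are assumptions for illustration.
using Microsoft.AspNetCore.DataProtection;

public class TenantSecretProtector
{
    private readonly IDataProtector _protector;

    public TenantSecretProtector(IDataProtectionProvider provider)
    {
        // The purpose string isolates these payloads from other protected data.
        _protector = provider.CreateProtector("TenantDb.ConnectionPassword");
    }

    // Called on save.
    public string Encrypt(string plainPassword) => _protector.Protect(plainPassword);

    // Called on get, just before building the target connection string.
    public string Decrypt(string protectedPassword) => _protector.Unprotect(protectedPassword);
}
```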
So I have two connection strings in my config file: one for the tenantdb and one for a default target db. The latter can be an empty/dummy one, but you'll probably get application startup errors from your DI code if you don't have one at all, as it will most likely auto-search for a connection string.
I also have switch code, where a user can switch to another tenant. The user can choose from all the tenants they have rights to (yes, rights are stored in tenantdb), and that again triggers the code steps described above.
Cheers.
I took this Razor Pages tutorial as my starting point.
This way you can have very loosely coupled target databases. The only overlap could be the user ID (or even some token from Azure, Google, AWS etc.).
Startup.
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();

        services.AddDbContext<TenantContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("TenantContext")));

        // your dummy (empty) target context.
        services.AddDbContext<TargetContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("TargetContext")));
    }
}
IndexModel (Tenant pages).
public class IndexModel : PageModel
{
    private readonly ContosoUniversity.Data.TenantContext _context;
    private ContosoUniversity.Data.TargetContext _targetContext;

    public IndexModel(ContosoUniversity.Data.TenantContext context, ContosoUniversity.Data.TargetContext targetContext)
    {
        _context = context;
        // set as default targetcontext -> the dummy/empty one.
        _targetContext = targetContext;
    }

    public TenantContext Context => _context;
    public TargetContext TargetContext { get => _targetContext; set => _targetContext = value; }

    public async Task OnGetAsync()
    {
        // get data from the default target.
        var student1 = _targetContext.Students.First();

        // or: switch tenant.
        // let's say you're logged in and have the user's ID as a guid;
        // then return the tenants for this user from tenantusers.
        var ut = await _context.TenantUser.FindAsync(new Guid("9245fe4a-d402-451c-b9ed-9c1a04247482"));

        // now get the tenant(s) for this user.
        var selectedTenant = await _context.Tenants.FindAsync(ut.TenantID);

        var builder = new DbContextOptionsBuilder<TargetContext>();
        builder.UseSqlServer(selectedTenant.ConnectionString);
        _targetContext = new TargetContext(builder.Options);

        // now get data from the switched-to database.
        var student2 = _targetContext.Students.First();
    }
}
Tenant.
public class Tenant
{
    public int TenantID { get; set; }
    public string Name { get; set; }
    // probably could slice the connection string up into props.
    public string ConnectionString { get; set; }
    public ICollection<TenantUser> TenantUsers { get; set; }
}
TenantUser.
public class TenantUser
{
    [Key]
    public Guid UserID { get; set; }
    public int TenantID { get; set; }
}
Default connstrings.
{
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "TenantContext": "Server=(localdb)\\mssqllocaldb;Database=TenantContext;Trusted_Connection=True;MultipleActiveResultSets=true",
    "TargetContext": "Server=(localdb)\\mssqllocaldb;Database=TargetContext;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}

Related

Which layer should read from the application configuration in an N-Tier application?

While working on several projects based on an N-tier architecture, I often noticed that I am not quite sure where to actually read the configuration from.
For example, let's say I have a project with an application layer, a business layer and a data layer. The business layer contains a function PerformImport() which performs a data import from a data source. The first step of this import is logging in to get access to the data from the data source. To do this, the function calls a function Login() which is implemented in the data layer. Should it:
Read the login username and password from the configuration and pass it to the Login() function or
Call the Login() function without parameters and have the credentials read in the function itself?
I can't really think about any reasons for or against the first or the second solution, so I am often not sure what to do here. This same question applies to many other possible situations, such as time intervals, URLs, database names or really anything that could be possibly stored in a configuration.
I was also thinking about reading it in the application layer and then passing it down to wherever the configuration entry is needed, but this would often result in a big list of parameters in the lower layers and just does not seem very efficient at all.
My answer will assume that you are using Dependency Injection.
My usual method to deal with this, is to define a Settings class next to the Implementation class. Register this Settings class in the DI container, and inject it in the Implementation class.
An example.
Let's assume we have a service which is defined by this interface.
public interface IMyService
{
    // snip for brevity
}
And we have the implementation of it somewhere.
public class MyService : IMyService
{
    // snip for brevity
}
Let's say that the service needs some settings. So define the settings class.
public class MyServiceSettings
{
    public string UserName { get; set; }
    public string Password { get; set; }
    public int TimeOutInSeconds { get; set; }
}
Let's inject this Settings class into the Implementation.
public class MyService : IMyService
{
    public MyService(MyServiceSettings settings)
    {
        this.settings = settings;
    }

    private readonly MyServiceSettings settings;

    // snip for brevity
}
Now we can use the settings in the implementation whenever we want.
Now we need to register the Settings class in the DI container. Let's assume we have a container, and IMyService is already registered. Now add the Settings class there.
public void CreateContainer()
{
    var container = new Container();
    container.RegisterScoped<IMyService, MyService>();

    var myServiceSettings = new MyServiceSettings();
    // TODO: Set values from configuration file, or a keyvault, or Azure Devops Variables, etc.
    container.RegisterInstance(myServiceSettings);
}
Now you have all the parts needed to use the settings wherever you need them.
Now where you store the settings, is IMHO usually tied to the resulting build, e.g. an executable. I do not want my class libraries to retrieve the settings from a database or configuration file, they only consume the settings instances I give them.
This technique is really easy to implement if you use the ASP.NET Core Configuration abstractions as described here: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-6.0
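With those abstractions, the wiring can be as small as this (a sketch; the "MyService" section name and the switch to IOptions are assumptions layered on the example above):

```csharp
// Sketch: binding MyServiceSettings from configuration via the
// ASP.NET Core options pattern. The "MyService" section name is assumed.
using Microsoft.Extensions.Options;

// In appsettings.json:
// "MyService": { "UserName": "import-user", "Password": "...", "TimeOutInSeconds": 30 }

// In Startup.ConfigureServices:
// services.Configure<MyServiceSettings>(Configuration.GetSection("MyService"));

// The implementation then takes IOptions<MyServiceSettings> instead of the raw class:
public class MyService : IMyService
{
    private readonly MyServiceSettings settings;

    public MyService(IOptions<MyServiceSettings> options)
    {
        this.settings = options.Value;
    }
}
```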

.Net core Dependency injection - with parameters

NOTE: This example has been simplified
I have got a client's Contact table and want to retrieve specific client contact information from the DB. The code I typed below brings me all contact details. I want to use a parameter so it only brings me a specific client's contacts.
I used an IClientContactRepository interface like this
public interface IClientContactRepository
{
    IQueryable<ClientContactModel> ClientContacts { get; }
}
And I used this class to retrieve data from the database with Dapper
public class ClientContactRepository : IClientContactRepository
{
    private readonly IConfiguration configuration;
    private List<ClientContactModel> ClientContactList { get; set; }

    public ClientContactRepository(IConfiguration config)
    {
        configuration = config;
        using (var connection = new SqlConnection(configuration["ConnectionString"]))
        {
            ClientContactList = connection.Query<ClientContactModel>("SELECT * FROM ContactTable").ToList();
        }
    }

    public IQueryable<ClientContactModel> ClientContacts => ClientContactList.AsQueryable();
}
In my Startup class
services.AddTransient<IClientContactRepository, ClientContactRepository>();
My QUESTION is: can I pass the client's id parameter to the constructor?
I tried this: add a parameter to the constructor
public ClientContactRepository(IConfiguration config, int clientId)
and tried to start up class.
services.AddTransient<IClientContactRepository, ClientContactRepository(int,i)>()
Didn't work....
Can someone help me how to pass parameter please?
Yes, but where are you getting the client ID from - is it a configured value that will be static for the lifetime of the application? If so, you can use the AddTransient method overload that accepts a factory delegate to create the objects.
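For completeness, that factory-delegate overload might look something like this (a sketch; the "ClientId" configuration key and the two-argument constructor are assumptions):

```csharp
// Sketch: AddTransient with a factory delegate supplying the extra
// constructor argument. The "ClientId" configuration key is assumed.
services.AddTransient<IClientContactRepository>(provider =>
{
    var config = provider.GetRequiredService<IConfiguration>();
    int clientId = int.Parse(config["ClientId"]);
    return new ClientContactRepository(config, clientId);
});
```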
The better way (will cover all use cases) is registering the type that can provide that information (create one if no such type exists) with the DI container and use that as a parameter in the constructor of your repo.
As an example, let’s say you get your client ID from a claim, so the type you need to inject is IPrincipal:
services.AddScoped<IPrincipal>(provider =>
    provider.GetService<IHttpContextAccessor>()
        .HttpContext
        .User);
You would then inject the IPrincipal into your repo constructor and retrieve the client ID. An even better way would be to create your own type “ClientIdAccessor” which is responsible for providing the client ID. You would then not have a dependency on IPrincipal when testing your repo and the implementation of this new type would only depend on external libraries for your asp.net core implementation.
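Such a ClientIdAccessor might be sketched like this (the interface shape and the "client_id" claim type are assumptions):

```csharp
// Sketch of the suggested ClientIdAccessor. The interface shape and the
// "client_id" claim type are assumptions for illustration.
public interface IClientIdAccessor
{
    int ClientId { get; }
}

public class ClaimsClientIdAccessor : IClientIdAccessor
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ClaimsClientIdAccessor(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public int ClientId =>
        int.Parse(_httpContextAccessor.HttpContext.User.FindFirst("client_id").Value);
}

// Registration:
// services.AddHttpContextAccessor();
// services.AddScoped<IClientIdAccessor, ClaimsClientIdAccessor>();
```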
Side note: are you certain you want to use AddTransient for your repo? Usually you'd want to use the same repo object for the lifetime of the request (i.e. AddScoped).

Alternative to Session Variable

I am trying to find an alternative to using a session variable. In my solution I have a project that is referenced by an ASP.NET web application and a console application. Both these applications can make changes to data and when a change is made the ID of the user making the change is logged against that row.
So if it was just the ASP.NET app making changes, it could do something like myObj.LastUpdatedByID = Session["userid"]. Given that the command line app needs to make changes and doesn't have a session, what alternative could I use that has the equivalent of session scope in ASP.NET but is also available in the console app scope?
I've had a look at MemoryCache, but that seems to be application level in ASP.NET.
I don't want to go down the line of passing the user ID through to each call.
Would something like checking for an HttpContext work - if there is one, pull from the session, and if there isn't, pull from MemoryCache? Or is there a better way of doing it?
EDIT:
The user ID is specifically set in the console app depending on what action is being carried. The console app is used for automated processes and there are numerous actions it undertakes. So for example, the sending email process would be carried out by user ID 1 and the delete old files process would be carried out by user ID 2. In some instances, the user ID would be set to the user ID that last made the change to that row of data.
EDIT:
Some example code (stripped for brevity). You can see I am using the MemoryCache here, which as I understand it is application-wide and therefore not usable in the ASP.NET app:
public class Base
{
    private int auditID = -1;

    public int AuditID
    {
        get
        {
            if (this.auditID <= 0)
            {
                ObjectCache memCache = MemoryCache.Default;
                this.auditID = ((int)memCache["CurrentUserID"]);
            }
            return this.auditID;
        }
    }
}

public class MyObject : Base
{
    public int LastUpdatedByID { get; set; } = 0;

    public bool Save()
    {
        bool b = false;
        this.LastUpdatedByID = this.AuditID;
        // Call to DB here...
        return b;
    }
}
If the data needs to be shared across applications then you can't use Session or HttpContext.Cache, since those depend on the current HttpContext, which you don't have in the console app.
Another way would be to store the data in a persistent data store like a database, or a distributed cache like Redis Cache / Azure Mem Cache.
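The HttpContext check from the question can also be kept in one place by hiding the source of the ID behind an abstraction that each host implements for itself, instead of scattering checks around (a sketch; all names here are invented):

```csharp
// Sketch: one abstraction, two host-specific implementations.
// All type and key names are invented for illustration.
public interface ICurrentUserIdProvider
{
    int CurrentUserId { get; }
}

// ASP.NET app: backed by the session.
public class SessionUserIdProvider : ICurrentUserIdProvider
{
    public int CurrentUserId => (int)HttpContext.Current.Session["userid"];
}

// Console app: set once per process/action, e.g. user ID 1 for the email job,
// user ID 2 for the delete-old-files job.
public class FixedUserIdProvider : ICurrentUserIdProvider
{
    public FixedUserIdProvider(int userId) { CurrentUserId = userId; }
    public int CurrentUserId { get; }
}
```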

Web Api User Tracking

I am in need of help with Web Api.
I am setting up a multi-tenant system where each tenant has their own database of data, using code-first EF and Web API (so that I can create multiple app platforms).
I have extended the standard ASP.NET Identity to include a client id and client model which will store all tenants and their users.
I have then created another context which tracks all the data each tenant stores.
Each tenant holds a database name which I need to access based on the authenticated user.
Getting the user id from each API controller seems easy: RequestContext.Principal, etc., then I can get the client and subsequently the client database name to pass to the database context. However, I am trying to implement a standard data repository pattern and really hate repeating myself in code, yet the only way I see it working at the moment is:
Application calls restful api after authorisation
Web Api captures call
Each endpoint gets the user id and passes it to the data store via the interface and subsequently into the data layer retrieving the database name for the context.
What I have a problem with here is each endpoint getting the user id. Is there a way to "store/track" the user id per session? Can this be achieved through scoped dependencies or something similar?
I hope that makes sense but if not please ask and I will try to clarify further, any help will be greatly appreciated.
Thanks
Carl
ASP.NET Web API does not have a session context. You may use a cookie or a request token identifier (pass this token back from login and use it as a parameter for further API calls).
This is something I developed some time ago. I simply create a new class deriving from ApiController and use it as the base for all other API classes. It uses the ASP.NET cache object, which can be accessed via HttpContext, with the current user id as a reference. If you need something else, you may use another way of caching your data:
public abstract class BaseController : ApiController
{
    private readonly object _lock = new object();
    private Guid? _customerId; // backing field

    /// <summary>
    /// The customer this controller is referencing.
    /// </summary>
    protected Guid CustomerId
    {
        get
        {
            if (!_customerId.HasValue)
            {
                InitApi();
                lock (_lock)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Guid? customerId = HttpContext.Current.Cache["APIID" + User.Identity.Name] as Guid?;
                        if (customerId.HasValue)
                        {
                            CustomerId = customerId.Value;
                        }
                        else
                        {
                            UserProfile user = UserManager.FindByName(User.Identity.Name);
                            if (user != null)
                            {
                                CustomerId = user.CustomerId;
                                HttpContext.Current.Cache["APIID" + User.Identity.Name] = user.CustomerId;
                            }
                        }
                    }
                    else
                    {
                        _customerId = Guid.Empty;
                    }
                }
            }
            return _customerId.GetValueOrDefault();
        }
        private set { _customerId = value; }
    }

    // ... more code
}
Do not blame me for the "lock" stuff. This code was some kind of "get it up and running and forget about it"...
A full example can be found here.
Maybe I am far from the truth, but Web API is stateless, so you don't really have a session to track.

Isolate users in signalR hub by domain

I have a web application that is a single IIS installation (This isn't changing), but has a dynamic collection of subdomains. Each subdomain has its own user accounts.
The problem I am running into is that when I run signalR on it, it treats all sub-domains as the same domain, so users who just so happen to have the same user name will get each others messages.
This is causing a security violation issue between domain accounts.
So far my best guess solutions for this have different levels of risks and problems.
Each user gets their own group, build the group name with the sub-domain name + user name.
This minimizes the risk of collision but doesn't remove it.
Using a Guid for the domain name, and reserving the first n characters for the guid, reduces the risk even further, but now for each user online I have a group formed.
On the owin start, spin up a new hub that represents each domain.
Each time I add a subdomain, I will have to restart the application to add the new hub. Right now, I don't have to do anything to add subdomains: the DNS supports the wildcard, and the host header in IIS is blank. All works except for the lack of subdomain awareness in SignalR.
Build a custom hub class, that makes the client collection domain aware, like the rest of the application.
This seems to be the cleanest, but by far, most time consuming. It also poses the highest risk of bugs, since I will have to compose a larger collection of QA tests beyond the TDD unit testing.
Last option, don't use SignalR, build my own long poll API.
This is the hardest one to accept, since it is the highest bandwidth and most exposed process. A basic survey of our target users shows that they are using websocket supporting browsers, so why would we purposely increase bandwidth or create new latency.
To see this failure, just grab the simple chat demo at ASP.NET/SignalR and run it on your local computer under two different browsers (FF and IE for my core tests), and have one call http://localhost and the other call http://yourcomputername. You will need IIS, not IIS Express, for a proper test.
My 2 cents: build your own implementation of IUserIdProvider; from there it should be easy to inspect each request and generate a user id that is unique across the multiple domains, and return that. This way SignalR knows whom to associate each request with. It'd be a simple and non-invasive solution. You can check here for more detail.
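A minimal sketch of that IUserIdProvider idea (written against SignalR 2.x; the host-qualifying scheme is just one option):

```csharp
// Sketch: qualify the user name with the subdomain of the request so that
// identical user names on different subdomains map to distinct SignalR users.
using Microsoft.AspNet.SignalR;

public class SubdomainQualifiedUserIdProvider : IUserIdProvider
{
    public string GetUserId(IRequest request)
    {
        // e.g. "tenant1.example.com" + "alice" -> "tenant1.example.com:alice"
        string host = request.Url.Host;
        string user = request.User?.Identity?.Name ?? string.Empty;
        return host + ":" + user;
    }
}

// Registration at startup:
// GlobalHost.DependencyResolver.Register(
//     typeof(IUserIdProvider), () => new SubdomainQualifiedUserIdProvider());
```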
I know this is a bit late; however, I've also come across this issue and I've since solved it using groups. The way I did it was to implement IHub myself, and then when the Clients property is set, wrap the value in my own implementation of IHubCallerConnectionContext<dynamic> and use a key to isolate all the calls made through the already available methods. Here's an example of what that class looked like:
internal class ClientsDatabaseIsolator : IHubCallerConnectionContext<object>
{
    private readonly string _database;
    private readonly IHubCallerConnectionContext<dynamic> _clients;

    public ClientsDatabaseIsolator(string database, IHubCallerConnectionContext<dynamic> clients)
    {
        if (database == null) throw new ArgumentNullException(nameof(database));
        this._database = database;
        this._clients = clients;
    }

    private string PrefixDatabase(string group)
    {
        return string.Concat(_database, ".", group);
    }

    public dynamic AllExcept(params string[] excludeConnectionIds)
    {
        return _clients.Group(_database, excludeConnectionIds);
    }

    public dynamic Client(string connectionId)
    {
        return _clients.Client(connectionId);
    }

    public dynamic Clients(IList<string> connectionIds)
    {
        return _clients.Clients(connectionIds);
    }

    public dynamic Group(string groupName, params string[] excludeConnectionIds)
    {
        return _clients.Group(PrefixDatabase(groupName), excludeConnectionIds);
    }

    public dynamic Groups(IList<string> groupNames, params string[] excludeConnectionIds)
    {
        return _clients.Groups(groupNames.Select(PrefixDatabase).ToList(), excludeConnectionIds);
    }

    public dynamic User(string userId)
    {
        return _clients.User(userId);
    }

    public dynamic Users(IList<string> userIds)
    {
        return _clients.Users(userIds);
    }

    public dynamic All
    {
        get { return _clients.Group(_database); }
    }

    public dynamic OthersInGroup(string groupName)
    {
        return _clients.OthersInGroup(PrefixDatabase(groupName));
    }

    public dynamic OthersInGroups(IList<string> groupNames)
    {
        return _clients.OthersInGroups(groupNames.Select(PrefixDatabase).ToList());
    }

    public dynamic Caller
    {
        get { return _clients.Caller; }
    }

    public dynamic CallerState
    {
        get { return _clients.CallerState; }
    }

    public dynamic Others
    {
        get { return _clients.OthersInGroup(_database); }
    }
}
Then, in OnConnected, I add the connection to the _database group. Now in my hub, when I call Clients.All.Send("message"), that really just sends messages to the group specified when the ClientsDatabaseIsolator was created - it's like calling Clients.Group(database).Send("message"), so you don't have to think about it. I'm not sure if this is the best solution, but it worked for us.
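The OnConnected step described above might be sketched like this (GetDatabaseForRequest is an assumed helper; adapt it to however the tenant key is actually derived):

```csharp
// Sketch of the OnConnected step: every connection joins a group named
// after its database/tenant. GetDatabaseForRequest is an assumed helper.
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public override async Task OnConnected()
    {
        string database = GetDatabaseForRequest(Context.Request);
        await Groups.Add(Context.ConnectionId, database);
        await base.OnConnected();
    }

    private static string GetDatabaseForRequest(IRequest request)
    {
        // Assumption: the tenant key is the first label of the host name.
        return request.Url.Host.Split('.')[0];
    }
}
```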
