My project is an ASP.Net MVC Website that has been deployed to multiple customers, and each customer has their own SQL Server database. As you'd expect, the data in each database is different, so even in development we have separate databases for each customer. The database schemas are identical. We have a custom database context extending DbContext and managed by Unity.
Currently we switch between these databases by either renaming the databases themselves, or editing the connection strings in Web.config (technically in ConnectionStrings.private.config, which we pull into Web.config using a configSource attribute).
How can I develop the same codebase against multiple customer databases at once, without editing any configuration when switching databases? For example, I might have one customer at http://customerone.localhost/ and another at http://customertwo.localhost/ (different port numbers would be OK too), each accessing its own database but sharing the C# code.
In Apache I can use <If> or SetEnvIf to set configuration based on the URL, but there doesn't seem to be an equivalent in IIS. When I try to change the connection string in C# code I am told it is read-only.
To be clear I don't want any changes to be written back to web.config - I just want the custom database connection string to be used for the life of the request.
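A minimal sketch of one way to do this, assuming a hypothetical CustomerContext class and per-customer connection string names (e.g. "customerone", "customertwo") already present in the config: the context derives the connection string name from the request's host, so nothing is ever written back to Web.config.

```csharp
// Sketch only: "CustomerContext" and the per-customer connection string names
// are assumptions. The idea is to map the request's host name to a named
// connection string for the life of the request.
using System.Configuration;
using System.Data.Entity;
using System.Web;

public class CustomerContext : DbContext
{
    public CustomerContext()
        : base(GetConnectionStringNameForRequest())
    {
    }

    private static string GetConnectionStringNameForRequest()
    {
        // e.g. "customerone.localhost" -> connection string named "customerone"
        string host = HttpContext.Current.Request.Url.Host;
        string key = host.Split('.')[0];

        return ConfigurationManager.ConnectionStrings[key] != null
            ? key
            : "DefaultConnection";   // fall back to the existing default
    }
}
```

Since the context is managed by Unity, the same lookup could instead live in the container registration (for example an injection factory combined with a per-request lifetime manager), so controllers never need to know which customer they are serving.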
Related
My client asked me to build an API in ASP.NET Core for a business management application. However, he gave me an unusual requirement: he needs each user to have their own database. All databases have the same structure of tables and relationships, and all are MySQL.
This means that each user will need their own connection string and some way to store that information.
In addition, other users will be added in the future, each with a database created just for their use.
Anyway, I don't know if this is possible, but if it is, how would I do it?
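It is possible. A rough sketch of one approach in ASP.NET Core, assuming the Pomelo MySQL provider (UseMySql) and a hypothetical ITenantConnectionResolver that looks up the current user's connection string in a central "catalog" database:

```csharp
// Sketch only. "AppDbContext", "ITenantConnectionResolver" and the catalog
// lookup are assumptions; newer Pomelo versions also want a ServerVersion
// argument on UseMySql.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpContextAccessor();
    services.AddScoped<ITenantConnectionResolver, TenantConnectionResolver>();

    // The factory overload runs per request scope, so each user gets a context
    // pointing at their own database.
    services.AddDbContext<AppDbContext>((provider, options) =>
    {
        var resolver = provider.GetRequiredService<ITenantConnectionResolver>();
        options.UseMySql(resolver.GetConnectionStringForCurrentUser());
    });
}
```

The per-user connection strings themselves would live somewhere central (a catalog database or configuration store), keyed by the authenticated user, rather than in appsettings.json.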
I have thousands of SQL Server databases (one for each client). When we decide to push to production, we usually have changes in the databases, the web API and the web application.
The problem is the time it takes to deploy everything, especially the databases. We are using Code First Migrations, ASP.NET MVC and SQL Server, all on the latest versions. It is a SaaS product, and the Code First migration process updates the databases one by one.
The API and the web application are deployed very quickly, within a few seconds. However, updating all the databases takes about 30 minutes. During that time some users get errors and cannot use the software, because the API targets databases that have not been updated yet. Worse, if something fails and stops partway through the database updates, the users whose databases have not been updated are stuck until we fix the issue and update the rest.
Any idea how to solve this problem and make clients happy?
PS: The web application doesn't access the databases directly; only the API does.
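For reference, the per-database update the question describes looks roughly like the sketch below when driven from code with EF6. The connection-string lookup and the parallel loop are assumptions on my part (parallelising is one way to shorten the 30-minute window, at the cost of more load on the server):

```csharp
// Sketch only: applies EF6 Code First migrations to every client database.
// "clientConnectionStrings" comes from a hypothetical lookup; Configuration is
// the generated DbMigrationsConfiguration for the context.
using System.Collections.Generic;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using System.Threading.Tasks;

public static class TenantMigrator
{
    public static void UpdateAll(IEnumerable<string> clientConnectionStrings)
    {
        Parallel.ForEach(clientConnectionStrings, connectionString =>
        {
            var configuration = new Configuration
            {
                // Point the migrator at this client's database.
                TargetDatabase = new DbConnectionInfo(connectionString, "System.Data.SqlClient")
            };

            new DbMigrator(configuration).Update();
        });
    }
}
```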
This question is somewhat opinion-based. The maintenance window approach is the easiest. If you want to do live-updating, another way would be:
Keep a version number in the database
Allow running multiple versions of the Web API side-by-side
Choose which version of the API to use by looking at the version in the database
Determine if the Web API's public interface is stable. If it is not, also find a way to allow running multiple web sites side-by-side and choose which one based on the version in the database
The most maintainable way to accomplish this would probably be to have at least 3 servers:
One backend server which hosts the old version
One backend server which hosts the new version
The frontend server which routes users to the proper backend server based on the current version.
The routing could take place only at login, or you could do something more fancy such as redirecting the logged-in user when an upgrade is detected. Obviously none of this deals with what happens to one particular client during the actual upgrade of that client's database. You'll still need to address that separately.
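To make the routing idea concrete, a rough illustration, assuming a hypothetical SchemaVersion table in each client database and two backend base URLs (the table name, version number and URLs are made up):

```csharp
// Sketch only: the frontend reads the client database's schema version and
// routes the request to whichever API version matches it.
using System.Data.SqlClient;

public static class BackendRouter
{
    public static string GetBackendBaseUrl(string clientConnectionString)
    {
        using (var connection = new SqlConnection(clientConnectionString))
        using (var command = new SqlCommand(
            "SELECT TOP 1 Version FROM dbo.SchemaVersion ORDER BY Version DESC", connection))
        {
            connection.Open();
            int version = (int)command.ExecuteScalar();

            // Databases that have not been migrated yet keep using the old API.
            return version >= 2 ? "https://api-v2.example.com"
                                : "https://api-v1.example.com";
        }
    }
}
```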
I have an ASP.NET MVC web application working with Entity Framework, and I have the same database schema in two different database engines (Oracle and MySQL). The database is the same in Oracle and MySQL. The application should work with these two providers because I have two different scenarios.
When I want to work in the Oracle scenario I have to manually change the Web.config to set the correct connection string and the correct providers for membership authentication, roleManager, etc., and I have to delete the database model (EDMX file) and recreate it for Oracle.
When I switch from Oracle to MySQL I have the same problem: I have to change the Web.config to set the correct providers and connection string, and recreate the database model (EDMX file) for MySQL.
Is there any way to avoid this heavy and boring task every time that I want to change the database?
Basically it is possible to create multiple Web.config files and automatically select the right one depending on the environment currently in use.
Additional information can be found here.
We are currently in the process of developing a SaaS application, codenamed FinAcuity, which will be hosted on the Windows Azure platform with SQL Azure as the primary database.
Here are some Technical Specifications for our product:
Development Environment - Asp.Net 4.0 with MVC 3 (Razor), Entity Framework
Database - SQL Azure
Here is our Business Case:
Our product is a SaaS product, and as it will contain clients' financial data, we are going to provide a separate database to each client to achieve a higher level of multi-tenancy, data isolation and security.
Clients can create multiple companies under their account, and these companies will be separated by schemas within that client's database.
Note: The table structure will be the same for each schema.
Here are some scenarios that will give you a deeper view of our application's processes.
Scenario 1:
To provision a new database upon client registration, we are going to run a stored procedure that creates the database with the basic structure.
Our doubt: Is this the correct way of doing it on SQL Azure, or is there some other way?
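One alternative, sketched below, is to issue the CREATE DATABASE and the schema script directly from application code rather than a stored procedure; the connection strings, name handling and retry behaviour are illustrative assumptions, and the schema script is assumed to be a single batch (no GO separators):

```csharp
// Sketch only: provisions a client database on SQL Azure from application code.
// Production code would validate the database name and retry, since SQL Azure
// can take a short while to finish creating the database.
using System.Data.SqlClient;

public static class DatabaseProvisioner
{
    public static void Provision(string masterConnectionString,
                                 string databaseName,
                                 string schemaScript)
    {
        // 1. CREATE DATABASE is issued while connected to master.
        using (var master = new SqlConnection(masterConnectionString))
        using (var create = new SqlCommand(
            "CREATE DATABASE [" + databaseName + "]", master))
        {
            master.Open();
            create.ExecuteNonQuery();
        }

        // 2. Run the basic schema (tables, seed data) against the new database.
        var builder = new SqlConnectionStringBuilder(masterConnectionString)
        {
            InitialCatalog = databaseName
        };
        using (var client = new SqlConnection(builder.ConnectionString))
        using (var schema = new SqlCommand(schemaScript, client))
        {
            client.Open();
            schema.ExecuteNonQuery();
        }
    }
}
```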
Scenario 2:
To access multiple schemas under a client database, we dynamically generate an SSDL file for each individual schema and use that file for the connection.
Our doubt: Is there any other way of doing it, such as using the same SSDL file instance for multiple connections and passing metadata for the connection?
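One possibility (sketched below, not tested against your model) is to keep a single shared CSDL/MSL pair and swap in only the schema-specific SSDL when building the entity connection; the file paths, names and use of ObjectContext are illustrative:

```csharp
// Sketch only: the CSDL and MSL are shared, only the SSDL differs per schema.
using System.Data.EntityClient;
using System.Data.Objects;

public static class SchemaContextFactory
{
    public static ObjectContext CreateForSchema(string schemaName,
                                                string sqlConnectionString)
    {
        var builder = new EntityConnectionStringBuilder
        {
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = sqlConnectionString,
            // Shared CSDL/MSL plus the schema-specific SSDL generated earlier.
            Metadata = string.Format(
                @"D:\Models\Model.csdl|D:\Models\Model.msl|D:\Models\{0}.ssdl",
                schemaName)
        };

        return new ObjectContext(new EntityConnection(builder.ConnectionString));
    }
}
```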
Scenario 3:
As our application supports ad-hoc querying and dynamic table creation from Excel files, we are going to provide a wizard that runs a stored procedure in the back end and creates the table dynamically under a particular schema in the client database, using the headers selected in the wizard.
Our doubt: Can you suggest a better way of doing this, if any?
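For what it's worth, the dynamic CREATE TABLE does not have to live in a stored procedure; the important part is validating the identifiers that come from the Excel headers, since they are user input. A rough sketch, with illustrative column types and a deliberately strict identifier check:

```csharp
// Sketch only: builds the CREATE TABLE from the selected Excel headers in C#.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text.RegularExpressions;

public static class DynamicTableBuilder
{
    private static string Safe(string identifier)
    {
        // Reject anything that is not a plain identifier to avoid SQL injection.
        if (!Regex.IsMatch(identifier, @"^[A-Za-z_][A-Za-z0-9_]*$"))
            throw new ArgumentException("Invalid identifier: " + identifier);
        return identifier;
    }

    public static void CreateTable(string connectionString, string schemaName,
                                   string tableName, IEnumerable<string> headers)
    {
        string columns = string.Join(", ",
            headers.Select(h => "[" + Safe(h) + "] NVARCHAR(255) NULL"));

        string sql = "CREATE TABLE [" + Safe(schemaName) + "].[" + Safe(tableName) + "] "
                   + "([Id] INT IDENTITY(1,1) PRIMARY KEY, " + columns + ")";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```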
Scenario 4:
Once a new table has been added to a schema, we have to update the EDMX file to get data from that newly created table. To do this we are going to run a stored procedure that fetches data from the newly created table.
Our doubt: Is there any way of updating the EDMX file at runtime and getting the data?
We need advice on the best possible solution for each of the scenarios listed above.
Thank you in advance.
Best Regards - Sahil
I think this is a little too much for a single question.
And I personally think you look at it from a wrong perspective. Why did you choose Entity Framework and SQL Azure? Do you really think these are the best technologies to address your problems?
I suggest you take a step back and investigate what other technologies could be used. Because what you're asking here looks like a schema-less solution, and SQL Azure / SQL Server wasn't built for that IMHO.
You can start by looking at a NoSQL (schema-less, key value store) solution, possibly in Windows Azure. There's a whitepaper that will get you started on that: NoSQL and the Windows Azure platform -- Investigation of an Unlikely Combination
Windows Azure Table Storage is a key-value store that could solve some of your issues:
Customer isolation / multiple schemas: WAZ Table Storage supports partitions; you could partition your data per customer instead of putting all the data together.
Provisioning: No need to provision anything. Get a storage account and you can get started. Then you can simply have some code that writes data in a specific partition for that customer.
Cost (not mentioned in the question): Table Storage is much cheaper than SQL Azure
...
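To make the partition-per-customer idea concrete, a minimal sketch with the classic Microsoft.WindowsAzure.Storage client library (the table name, keys and properties are made up):

```csharp
// Sketch only: the PartitionKey is what keeps each customer's rows isolated
// inside a single table.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class TableStorageExample
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var table = account.CreateCloudTableClient().GetTableReference("Companies");
        table.CreateIfNotExists();

        // PartitionKey = client id, RowKey = company id within that client.
        var entity = new DynamicTableEntity("client-0042", "company-001");
        entity.Properties["Name"] = new EntityProperty("Acme Ltd");
        entity.Properties["FiscalYearStart"] = new EntityProperty(new DateTime(2012, 4, 1));
        table.Execute(TableOperation.InsertOrReplace(entity));

        // Reading everything for one client is a single-partition query.
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "client-0042"));
        foreach (var company in table.ExecuteQuery(query))
        {
            Console.WriteLine(company.RowKey);
        }
    }
}
```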
I am currently working on an ASP.NET MVC 3 application to do some database manipulation on a specific set of metadata tables in a database. I'll basically be doing inserts, updates, deletes, etc. from the application. However, the aim is to migrate these metadata tables across different databases (and most likely servers as well) so that we can use the same methodology across clients.
Right now I am using Linq2Sql to generate my ORM around these specific metadata tables. The aim is to use the same application to manipulate the data across servers and databases. What is the best practice/approach for this?
The simplest solution that I've thought of is a constructor for my DataContext where I could use user input (server + database name + user ID + password) to build a connection string and pass it into my DataContext. Since the framework should be the same across all databases and servers, this should work in theory. However, I'm not sure where the best place to maintain the modified connection string is (the session? a cookie?).
What is the best practice around this kind of server/database switching in a .Net app?
You're going down the right path by passing the connection in the constructor for the context. As for maintaining the connection string, I suspect that would depend on how you are managing the other site information for your various sites. If you are managing that in a central database, save the site-specific value in that central database and use the same persistence you are using for those site settings while the user is visiting your site. Just make sure not to pass server-specific information to the client (thus a cookie would not be the recommended approach).
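Roughly what that could look like, assuming a hypothetical central catalog (CatalogRepository / SiteInfo) and a generated Linq2Sql context named MetadataDataContext; the session holds only a site key, never the credentials:

```csharp
// Sketch only: server, database and credentials come from a server-side lookup,
// and the resulting connection string feeds the DataContext constructor.
using System.Data.SqlClient;
using System.Web;

public static class MetadataContextFactory
{
    public static MetadataDataContext CreateForCurrentSite()
    {
        string siteKey = (string)HttpContext.Current.Session["SiteKey"];

        // Server-side lookup; nothing server-specific ever reaches the client.
        SiteInfo site = CatalogRepository.GetSite(siteKey);

        var builder = new SqlConnectionStringBuilder
        {
            DataSource = site.Server,
            InitialCatalog = site.Database,
            UserID = site.UserId,
            Password = site.Password
        };

        return new MetadataDataContext(builder.ConnectionString);
    }
}
```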