Brief introduction:
I have an ASP.NET WebForms site with the particularity that it doesn't have only one database; it has many.
Why? Because you can create new "instances" of the site on the fly. Every "instance" shares the same codebase but has its own database. All these databases have the same schema (structure) but, of course, different data. Don't ask "why don't you put everything in one database and use an InstanceId to know which is which?" because it's a business policy thing.
The application knows which instance is being requested from the URL. There is one extra database to accomplish this (I do know its connection string at design time). This database has only two tables and associates URLs with 'application instances'. Then, of course, each 'application instance' has its associated connection string.
Current situation: Nothing is being used right now to help us with the job of keeping every instance database in sync (propagating schema changes to every one). So we are doing it by hand, which is of course a total mess.
Question: I'd like to handle schema changes in a Rails-migrations style, preferably with migratordotnet, but I could use any other tool if it's easier to set up.
The problem is that migratordotnet needs the connection strings to be declared in the proj.build file, and I don't know them until runtime.
What would be REALLY useful is some kind of method running on Application_Start that applies the latest migration to every database.
How could this be done with migratordotnet or anything similar? Any other suggestion is gratefully welcomed.
Thank you!
Since this is an old question, I assume that you have solved the problem in some manner or another, but I'll post a solution anyway for the benefit of other people stumbling across this question. It is possible to invoke MigratorDotNet from code, rather than having it as an MSBuild target:
public static void MigrateToLastVersion(string provider, string connectionString)
{
    // Logger(false, no writers) keeps the migration run silent; pass an ILogWriter if you want progress output.
    var silentLogger = new Logger(false, new ILogWriter[0]);

    var migrator = new Migrator.Migrator(provider, connectionString,
        typeof(YourMigrationAssembly).Assembly, false, silentLogger);
    migrator.MigrateToLastVersion();
}
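Building on that, you could call this from Global.asax to satisfy the "apply the latest migration to every database on Application_Start" requirement. This is only a sketch: InstanceDirectory is a hypothetical helper that reads every instance connection string from the url-to-instance lookup database (whose connection string you know at design time), and the helper above is assumed to live on a static Migrations class:

// In Global.asax.cs - a sketch, not tested.
protected void Application_Start(object sender, EventArgs e)
{
    foreach (string instanceConnectionString in InstanceDirectory.GetAllConnectionStrings())
    {
        // "SqlServer" is MigratorDotNet's provider name for SQL Server;
        // check the provider names supported by your MigratorDotNet version.
        Migrations.MigrateToLastVersion("SqlServer", instanceConnectionString);
    }
}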
RedGate has a SQL Comparison SDK that could be used. Here is a Case Study that looks promising, but I can't tell you anything from experience as I haven't used it. Download the trial and kick the tires.
You could use Mig# to maintain your migrations in your C# or .NET code: https://github.com/dradovic/MigSharp
Check out Fluent-Migrator.
The title is not so accurate, but I couldn't come up with a better one.
I'm trying to write a MySQL Connector for MS' Forefront Identity Manager (FIM is basically a sync engine that synchronizes identities between various data sources using a meta directory). But I'm having difficulty coming up with an appropriate design.
Let’s say I want to import user data from a db into FIM’s metaverse. A user object has various attributes like firstname, lastname, address etc. In the database these attributes can be distributed between multiple tables. FIM ultimately needs these attributes to be merged into one object. So the user needs to configure the connector to tell it how the data is stored in the DB.
I was wondering what would be the “best” way to represent this configuration. Two alternatives come to (my) mind:
I could just save a select query that merges/joins the data, so that the result is a single "table" with all the desired attributes. The problem with this is that I think I would have to do some kind of parsing on this query string to create a FIM-compatible schema out of it (which is basically the name of the object type (e.g. "person") and a list of attributes). This schema needs to be creatable from the query string alone, without actually executing the query (I could execute some fake queries if that would simplify the process).
I could create some classes to represent the database schema, i.e. the tables and relationships. Since I’m not that experienced with MySQL (or databases at all for that matter) I’m running the risk of missing some special cases. Also it might be some kind of overkill, since the schema can be assumed as fixed once it's configured.
Does anyone have some advice on which alternative to choose and how to tackle the problems that would come with it? Or is there another – better – alternative I didn't think of? Any advice would be greatly appreciated!
If something is not clear, please let me know.
Edit: Since there have been some questions on the use case, I'm going to elaborate a bit:
As I've said, I'm developing a Management Agent for FIM. FIM provides a so called Extensible Connectivity Management Agent, which is basically one single class implementing a few interfaces. (See this technet guide for a sample implementation).
Since I want to develop a generic agent for managing identities in a MySQL database, I don't know the database layout at compile time. When the end user wants to use the management agent, he needs to decide which attributes of the identities he'd like to manage. So I need to give the user some way to configure the management agent. My main question is how to design the classes that save this configuration.
Let's look at a simple example:
Say you want to manage employee identities. To keep it simple, we have three attributes:
firstName
lastName
department
In this example it could be, for instance, just one single table with 4 columns (the attributes plus an id). But it could also be a much better design that uses two tables, one user table and one department table, with a 1:1 relation to define the user's department.
FIM requires me to consolidate these attributes into one object. It provides a class CSEntryChange which has an AttributeChanges collection member. I would then create some instances of AttributeChange (which basically contains the attribute name and its value) and add them to the collection. So the user-editable configuration must tell the management agent how it can get the users with all defined attributes from the db, and how to create and modify users in that database.
So ideally I'd have an instance of some "MySQLSchema" class (which is configured by the user up front) that could return a List<CSEntryChange> (I wouldn't actually use the CSEntryChange class for the sake of decoupling, but you should get the point) containing all users in the db (pagination might be a requirement, but I can figure that out later). In addition, I'd like to be able to pass it a CSEntryChange, which would result in the corresponding database entries being updated (or created if not yet present).
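To make option 2 a bit more concrete, the configuration could be represented by something as simple as this (just a rough sketch; all names are placeholders, and joins/relations between tables would still need to be added):

using System.Collections.Generic;
using System.Linq;

// Describes where one FIM attribute lives in the MySQL schema.
public class AttributeMapping
{
    public string AttributeName { get; set; }  // e.g. "firstName"
    public string Table { get; set; }          // e.g. "user" or "department"
    public string Column { get; set; }         // e.g. "first_name"
}

// User-editable configuration for one FIM object type.
public class ObjectTypeConfiguration
{
    public string ObjectType { get; set; }     // e.g. "person"
    public string AnchorTable { get; set; }    // table holding the identity's key, e.g. "user"
    public string AnchorColumn { get; set; }   // e.g. "id"
    public List<AttributeMapping> Attributes { get; private set; }

    public ObjectTypeConfiguration()
    {
        Attributes = new List<AttributeMapping>();
    }

    // The FIM schema (object type name plus attribute names) can be derived directly
    // from this configuration, without executing any query against the database.
    public IEnumerable<string> AttributeNames()
    {
        return Attributes.Select(a => a.AttributeName);
    }
}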
I hope this clears it up a bit more :)
I think that your real question is, "How to access MySQL entities over C#?"
To begin with, I hope you are building this as an MVC application.
I would suggest sticking to a full Microsoft stack for purposes of learning and ease of implementation.
With this in mind, you will want to create an EntityFramework MySQL data provider in the following steps:
Create a new project and add Entity Framework either through the NuGet package manager UI or the package manager console by typing Install-Package EntityFramework -Version 6.0.2 (and add a reference to this project from your web project). Look halfway down the linked page for "Configure EntityFramework to work with a MySQL database".
Install the MySQL provider for Entity Framework through the NuGet package manager UI or by typing Install-Package MySql.Data.Entity in the package manager console.
The next step requires an understanding of DB configuration changes, which are nicely detailed here - Configure EntityFramework to work with a MySQL database.
You should end up with a nice class structure which will allow you to traverse your entities' navigation properties through EF.
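As a rough illustration (entity names are made up, and it assumes the MySQL provider has been registered in your config as the walkthrough describes), the resulting class structure might look like this:

using System.Data.Entity;

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public int DepartmentId { get; set; }
    public virtual Department Department { get; set; }   // navigation property
}

public class Department
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class IdentityContext : DbContext
{
    // "MyDbConnection" is assumed to be a connection string pointing at the MySQL database.
    public IdentityContext() : base("name=MyDbConnection") { }

    public DbSet<Person> People { get; set; }
    public DbSet<Department> Departments { get; set; }
}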
Depending on the level of security your application requires, you may also want to create data transfer objects (DTOs) that contain only the data required for your remote calls, keeping your data calls efficient.
This is by no means a definitive guide on how to do this, but hopefully gives you a start in the right direction.
With regards to your step #1 above:
I could just save a select query that merges/joins the data, so that the result is a single "table" with all the desired attributes. The problem with this is that I think I would have to do some kind of parsing on this query string to create a FIM-compatible schema out of it (which is basically the name of the object type (e.g. "person") and a list of attributes). This schema needs to be creatable from the query string alone, without actually executing the query (I could execute some fake queries if that would simplify the process).
I am slightly confused by this. Are you saying that you want to dynamically update your database schema based on application requests?
You can use NHibernate with MySQL. NHibernate is a full-featured ORM where C# classes map to your MySQL tables, and the rest will be a breeze once you get the hang of NHibernate.
A sample is here for your reference.
http://www.codeproject.com/Articles/26123/NHibernate-and-MySQL-A-simple-example
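For a rough idea of what the class-to-table mapping looks like, here is a small sketch using NHibernate's mapping-by-code style (the linked article uses hbm.xml files instead, but the principle is the same; all names are made up):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Person
{
    public virtual int Id { get; set; }
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
}

// Maps the Person class onto the assumed "person" table.
public class PersonMap : ClassMapping<Person>
{
    public PersonMap()
    {
        Table("person");
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.FirstName);
        Property(x => x.LastName);
    }
}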
When you use the MySQL Connector/Net you can also use Entity Framework like this example from MSDN:
using (var db = new BloggingContext())
{
    // Create and save a new Blog
    Console.Write("Enter a name for a new Blog: ");
    var name = Console.ReadLine();

    var blog = new Blog { Name = name };
    db.Blogs.Add(blog);
    db.SaveChanges();
}
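For completeness, the snippet above assumes a context and entity roughly like the following (BloggingContext and Blog come from the EF Code First tutorial; the connection string name here is just an assumption):

using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // "BloggingConnection" is assumed to point at the MySQL database via Connector/Net.
    public BloggingContext() : base("name=BloggingConnection") { }

    public DbSet<Blog> Blogs { get; set; }
}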
I have some experience with .NET <-> MySQL communication, and I've used Entity Framework in the past for the communication - I had a lot of problems with it and performance issues, and soon came to regret using it (this was 1-2 years ago, so maybe they have fixed it up since). Of course, using an ORM framework adds a layer on top of your db communication which, in my case, proved to be undesirable in terms of performance and flexibility.
Finally, I chose to take the following approach:
1) Create models with POCO classes as you would do with Entity Framework. Those models may or may not include relationships - it is up to your preference. I prefer to only add the relationships when I actually need them (so some objects may have their db relationships in the POCO's and some may not). I chose this because it lowers the complexities of when to pre-load the relationships and when not. Basically, if you don't need it - don't add it.
2) Create DAL layer (for example, using the repository pattern) that accepts and works with those objects and fires direct queries to MySQL. No EF required for this - you just need to install the Connector/NET for MySQL and you are ready to go.
A quick example of this would be the following (note: the example is off the top of my head and is just to illustrate the classes; it uses command parameters to prevent injection and so on):
public class Person
{
    public string Name { get; set; }
}

public interface IPersonRepository
{
    void AddPerson(Person p);
}

public class PersonRepository : IPersonRepository
{
    public void AddPerson(Person p)
    {
        using (var connection = new MySqlConnection("some connection string"))
        {
            connection.Open();

            // Parameterized insert to avoid SQL injection.
            var command = new MySqlCommand("insert into Person (Name) values (@name)", connection);
            command.Parameters.AddWithValue("@name", p.Name);
            command.ExecuteNonQuery();
        }
    }
}
The benefits of this approach for me are:
Performance - my application needs to insert large amounts of data into MySQL. Entity Framework could not cope with this. If your application doesn't handle a lot of data, you might be all right with EF.
Flexibility - writing my own queries allows me to have better control over the communication. You can choose, for example, to use bulk inserts in MySQL (from a file - really powerful and fast when you need to handle large amounts of data), for which you will need to bypass Entity Framework (a sketch follows below). I also found that EF generates some funky queries.
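For illustration, the bulk-insert route with Connector/NET might look roughly like this (table name, file path and connection string are placeholders; MySqlBulkLoader wraps MySQL's LOAD DATA INFILE):

using MySql.Data.MySqlClient;

public static class BulkImport
{
    // Loads a CSV file straight into MySQL, bypassing EF entirely.
    public static int LoadPeople(string connectionString)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();

            var loader = new MySqlBulkLoader(connection)
            {
                TableName = "Person",
                FileName = @"C:\data\people.csv",
                FieldTerminator = ",",
                NumberOfLinesToSkip = 1   // skip the CSV header row
            };

            return loader.Load();   // returns the number of rows inserted
        }
    }
}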
The main drawback is, of course, more work - you will get some things for "free" with the Entity Framework.
So, I can recommend the following:
Consider the amounts of data that you need to handle and make a small exercise application with those amounts. How does EF (or any other ORM) handle it? What about direct queries to the database? That will give you a somewhat accurate idea of how the communication will perform.
Consider how much time you have for building this application - if you are looking for a quick solution and are willing to sacrifice a bit of performance - go for EF or another ORM framework. If you have more time on your hands and would like to make a flexible solution - go for direct queries to the database.
Good luck!
Use Entity Framework Code First.
http://msdn.microsoft.com/en-us/data/jj193542.aspx
It is still a lot of work, but I think this is the quickest approach.
Create C# classes according to the user's configuration and create the DB schema from those classes.
I've successfully built many applications that use the ASP.NET Membership Provider. I'm diving into a new project that is multi-tenant and I'm making scalable by partitioning user accounts across multiple SQL Server databases.
Each database has the Membership schema installed, and I have no problem setting any one of the databases as my Membership provider by adding the information in my Web.Config file. However, my goal is to use a master database as a look-up table. This database has a table called "Accounts", and I store the membership database for each account in a field called "MembershipPartition" that is associated with the users on that account (each account can have many users).
Here is the basic architecture, with extraneous information removed:
What I am trying to do is set my Membership provider to that particular provider database via an updated connection string (or even just the databaseName since this is all on the same SQL Server) AFTER I do the lookup. This means that it has to happen outside of the Web.Config and the Global.asax file.
I also cannot pre-populate my Web.Config with a list of available connection strings, because the system may generate a new membership database whenever certain criteria are met, such as reaching a certain cap on a previous database, or a particular account requiring that its users are isolated in an individual database. So this has to be completely dynamic.
In my research I've tried a number of methods which are detailed in the following posts:
StackOverflow, ASP.NET Forums and ASP.NET Forums
None of the above solutions work as desired. Either they rely on the solution executing too early in the application life-cycle, or they use hard-coded values for the connection string and only solve the problem of removing the reliance on the Web.Config.
IDEALLY I would like to expose a single method that I can call at any time from a C# class to simply switch the database that the Membership system uses as needed, for example:
string Membership_DB_1 = "Server=[server];Database=database1;User ID=[userid];Password=[password];Trusted_Connection=False;Encrypt=True;"
string Membership_DB_2 = "Server=[server];Database=database2;User ID=[userid];Password=[password];Trusted_Connection=False;Encrypt=True;"
string Membership_DB_3 = "Server=[server];Database=database3;User ID=[userid];Password=[password];Trusted_Connection=False;Encrypt=True;"
Membership.SwitchConnectionString(Membership_DB_1);
Membership.SwitchConnectionString(Membership_DB_2);
Membership.SwitchConnectionString(Membership_DB_3);
I'd like to thank the community in advance, I have spent many days (and now weeks) wrestling with this scenario and I am near the point of pivoting to another solution!
UPDATED:
I've also been looking into the following solution, but again - it requires setting up the Web.Config with all potential connection strings in advance, which is not desirable from a management perspective:
Another Hard Coded Solution
Is this just not possible? Next I'll be looking into rolling my own version of the Membership Provider by using the Provider Toolkit Samples provided by Microsoft, but there aren't many samples or documentation available, and the actual download pages are missing in action! Scott Guthrie posted about it in 2005, but as you can see the reference in the article is now missing. I ended up finding a Download Link to the MSI here, but I'm not sure that the code is well maintained or even still relevant...
I was trying to avoid having to get this deep into the Provider components, but I plan to keep marching on! Will keep this thread updated as I go for any others looking to do the same!
Your proposed approach for switching connection strings:
Membership.SwitchConnectionString(Membership_DB_1);
doesn't sound practicable. Don't forget, there is a single static default provider which may be accessed concurrently from multiple request threads, and if you call this method you'll be changing the provider for all threads, which is unlikely to be what you want.
I must admit I don't see how your proposed architecture will increase scalability, but if you do want to do this, I think you're going to have to write a custom Provider.
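If you do go the custom provider route, the per-tenant lookup it would need on every call might look roughly like this (class, table and column names are assumptions based on the question; a real provider would call GetConnectionString from each of its data-access methods):

using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;

// Hypothetical resolver a custom membership provider could use instead of one
// static connection string: it maps the request's host name to the tenant's
// membership partition via the master "Accounts" table.
public static class MembershipPartitionResolver
{
    // Connection string of the master lookup database (known at design time per the question).
    private const string MasterConnectionString =
        "Server=[server];Database=MasterLookup;User ID=[userid];Password=[password];";

    // Cache host -> connection string so the lookup table is not hit on every call.
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    public static string GetConnectionString(string host)
    {
        return Cache.GetOrAdd(host, LookUpPartition);
    }

    private static string LookUpPartition(string host)
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(
            "SELECT MembershipPartition FROM Accounts WHERE Host = @host", connection))
        {
            command.Parameters.AddWithValue("@host", host);
            connection.Open();
            var database = (string)command.ExecuteScalar();
            return string.Format(
                "Server=[server];Database={0};User ID=[userid];Password=[password];", database);
        }
    }
}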
I wonder what you are using for updating a client database when your program is patched?
Let's take a look at this scenario:
You have a desktop application (.NET, Entity Framework) which uses a SQL Server Compact database.
You release a new version of your application which uses an extended database schema.
The user downloads a patch with the modified files.
How do you update the database?
I wonder how you handle this process. I have some ideas, but I think more experienced people can give me better, tried-and-tested solutions or advice.
You need a migration framework.
There are existing OSS libraries like FluentMigrator; a small example migration follows the links below:
project page
wiki
long "Getting started" blogpost
Entity Framework Code First will also get its own migration framework, but it's still in beta:
Code First Migrations: Beta 1 Released
Code First Migrations: Beta 1 ‘No-Magic’ Walkthrough
Code First Migrations: Beta 1 ‘With-Magic’ Walkthrough (Automatic Migrations)
You need to provide a DB upgrade mechanism, either explicit or hidden in your code, and thus implement something like a DB versioning chain.
There are a couple of aspects to it.
First is versioning. You need some way of tying the version of the db to the version of the program; it could be something as simple as a table with a version number in it. You need to check it when the application starts as well.
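For example, the check could be as simple as something like this (a sketch only; the SchemaVersion table, the version numbers and the SQL Server Compact usage are assumptions based on the question, and the table is assumed to contain at least one row):

using System.Data.SqlServerCe;

public static class DatabaseUpgrader
{
    // The schema version this build of the application expects.
    private const int ExpectedSchemaVersion = 3;

    public static void EnsureUpToDate(string connectionString)
    {
        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open();

            int currentVersion;
            using (var command = new SqlCeCommand("SELECT MAX(Version) FROM SchemaVersion", connection))
            {
                currentVersion = (int)command.ExecuteScalar();
            }

            // Apply every upgrade step between the database's version and the expected one.
            for (int version = currentVersion + 1; version <= ExpectedSchemaVersion; version++)
            {
                ApplyUpgradeStep(connection, version);
            }
        }
    }

    private static void ApplyUpgradeStep(SqlCeConnection connection, int version)
    {
        // e.g. execute the ALTER TABLE statements for this step here, then record the new version:
        using (var command = new SqlCeCommand(
            "INSERT INTO SchemaVersion (Version) VALUES (@version)", connection))
        {
            command.Parameters.AddWithValue("@version", version);
            command.ExecuteNonQuery();
        }
    }
}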
One fun scenario is that you 'update' the application and db successfully, and then for some operational reason the customer restores a previous version of the db. Or, if you are on a frequent patch cycle, do you have to apply each patch in order, or can they catch up? Do you want to handle application-only or database-only upgrades differently?
There's no one right way for this, you have to look at what sort of changes you make, and what level of complexity you are prepared to maintain in order to cope with everything that could go wrong.
A couple of things are worth looking at.
Two databases: one for static 'read-only' data, and one for more dynamic stuff. Upgrading the static data can then simply be a restore from a resource within the upgrade package.
The other is how much you can do with metadata stored in db tables. For instance, a version-based XSD to describe your objects instead of a concrete class. That goes in your read-only db; now you've updated code and application with a restore and possibly some transforms.
Lots of ways to go, just remember
'users' will always find some way of making you look like an eejit, by doing something you never thought they would.
The more complex you make the system, the more chance of the above.
And last but not least, don't take shortcuts on data version conversions; if you lose data integrity, everything else you do will be wasted.
I've made a local database for a C# project.
I know basic SQL commands, but haven't worked with databases in C#.
What I'd like to know specifically is:
How to read from the database (query)
How to add and update rows
The database only consists of 3 tables, so I don't think anything fancy is needed.
First, you should learn a bit about various technologies and APIs for connecting with a database.
The more traditional method is ADO.NET, which allows you to define connections and execute SQL queries or stored procedures very easily. I recommend digging up a basic tutorial on ADO.NET using Google, which may differ depending on what type of project you're creating (web app, console, WinForms, etc).
Nowadays, ORMs are becoming increasingly popular. They allow you to define your object model in code (such that every database table would be a class, and columns would be properties on that class) and bind it to an existing database. To add a new row to a table, you'd just create an instance of a class and call a "Save" method when you're done.
The .NET framework has LINQ to SQL and the Entity Framework for this sort of pattern, both of which have plenty of tutorials online. An open source project I really like is Castle Active Record, which is built on top of NHibernate. It makes defining ORMs quite easy.
If you have specific questions about any of the above, don't hesitate to post a new question with more specific inquiries. Good luck!
Update:
I thought I'd also put in one last reference as it seems you might be interested in working with local database stores rather than building a client/server app. SQLite allows you to interact with local stores on the file system through SQL code. There's also a .NET binding maintained by the SQLite guys (which would in theory allow you to work with the other platforms I mentioned): http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki
You can use SQLCE.
This blog will give you a good start.
http://weblogs.asp.net/scottgu/archive/2011/01/11/vs-2010-sp1-and-sql-ce.aspx
Here is a small tutorial that should be helpful to you.
You can make use of the SqlDataReader to read data, and the SqlCommand to insert, update, and delete rows in your tables.
http://www.dotnetperls.com/sqlclient
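For example, a bare-bones version of that pattern could look like this (the connection string and the table/column names are placeholders for your local database):

using System;
using System.Data.SqlClient;

class Program
{
    // Placeholder connection string for a local .mdf database attached to the project.
    const string ConnectionString =
        @"Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\MyDatabase.mdf;Integrated Security=True";

    static void Main()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // Read rows (query).
            using (var select = new SqlCommand("SELECT Id, Name FROM Customer", connection))
            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }

            // Insert a row.
            using (var insert = new SqlCommand("INSERT INTO Customer (Name) VALUES (@name)", connection))
            {
                insert.Parameters.AddWithValue("@name", "New customer");
                insert.ExecuteNonQuery();
            }

            // Update a row.
            using (var update = new SqlCommand("UPDATE Customer SET Name = @name WHERE Id = @id", connection))
            {
                update.Parameters.AddWithValue("@name", "Renamed customer");
                update.Parameters.AddWithValue("@id", 1);
                update.ExecuteNonQuery();
            }
        }
    }
}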
You could also use a local SQLite database with Entity Framework Core by adding the following to your Startup.cs:
services.AddDbContext<DemoDbContext>(options => options.UseSqlite("Filename=data.db"));
This question actually refers to another one already asked; now I want to reformulate it :)
My issue is: there is an online shop running on a MySQL database, hosted somewhere on the internet. Now I'd like to do some administration stuff from my C# application.
What I want to do: all I want is to run SQL queries on that database and get the results as entities in my application, so I can browse through them like normal Lists/Classes and then post the changes back to the database. The problem is not the connection to the database - it works fine (using SSH and the Connector/NET driver) - but the question of how to turn the SQL results into C# classes.
I had a closer look at Fluent NHibernate and SubSonic, but I still can't figure out which one suits best or - even worse - if these are really the right approaches to my problem.
So I don't want to build an application which stores its own data in a database; rather, one that gets the data it needs from an existing public database.
I hope I could make myself more clear this time :)
Thanks in advance!
ORM is definitely the way to go, because it allows you to abstract your data access.
You may find a code generator helpful (to avoid the repetitive task of writing the classes and all their properties): NHibernate Code Generation.
This way you can still use classic NHibernate instead of Fluent NHibernate, which by the way looks pretty useful.
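To illustrate the kind of workflow you described (query, browse the results as objects, post changes back), a rough NHibernate sketch might look like this; the session factory configuration and the Product mapping are assumed to already exist, and all names are made up:

using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual decimal Price { get; set; }
}

public static class ShopAdmin
{
    public static void RaisePrices(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            // Results come back as a normal list of C# objects.
            var products = session.Query<Product>()
                                  .Where(p => p.Price < 10m)
                                  .ToList();

            foreach (var product in products)
            {
                product.Price += 1m;   // change the entity in memory
            }

            // Committing the transaction flushes the changes back to the MySQL database.
            transaction.Commit();
        }
    }
}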