Until now, all of my questions about creating the program itself [C# WinForms] have been answered by searching the SO database. It's been an immense help, really. But now I'm through with my program and I want to know what sort of procedure comes next; what is involved in migrating to a running SQL Server.
I built my program with [Visual Studio 2013] to operate temporarily on a LocalDB for testing purposes. I noticed that nothing gets saved whenever I close the application, which actually saves a lot of time during test runs, but now I want the published software to save data permanently.
I've researched this topic a bit and this is what I've come up with so far:
I would first set up the connection string to work with my SQL Server instance (SQL Server Express 2012/2014); see the sketch after this list.
Publish my program. (For this I have prepared an InstallShield wizard. Is this preferable to a ClickOnce application?)
Run and prep the SQL Server service (no idea how to do this... I know, I know... but I'm really only fluent in the coding department).
Based on what I read, I would want to attach my database (.mdf service-based database) through SQL Server Management Studio.
?? I'm not sure what happens after that.
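For reference, here's roughly what I mean in step 1; the instance, database and file names are placeholders from my own guesswork:

using System.Data.SqlClient;

class ConnectionStrings
{
    // What I use today against LocalDB (the MDF is attached on the fly).
    // (LocalDB)\v11.0 is the SQL Server 2012 LocalDB instance name;
    // 2014 uses (LocalDB)\MSSQLLocalDB instead.
    const string LocalDb =
        @"Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\MyAppDb.mdf;Integrated Security=True";

    // What I assume it becomes once the MDF is attached to a
    // SQL Server Express instance through Management Studio.
    const string SqlExpress =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True";

    static void Main()
    {
        using (var conn = new SqlConnection(SqlExpress))
        {
            conn.Open(); // fails if the service isn't running or the DB isn't attached
        }
    }
}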
Am I correct in these assumptions?
A) Would the program run just as simply as the LocalDb variant?
B) Would I finally be able to create permanent records?
C) Would the SQL Server service have to be running each time, alongside the program?
D) What am I not seeing? What procedure am I missing?
All forms of help are appreciated. Do note that I have researched the topic only lightly, and have so far only come up with the idea of attaching the MDF to the server through SQL Server Management Studio, and the rest is magic (so to speak).
Honestly, I know so little about SQL Server that I might not even be running on LocalDB; I've read on another thread that I was merely working with Visual Studio's SQL Server Data Tools (I've never consciously run a SQL Server instance during the course of creating my program). But for the record, LocalDB is what's written in my connection string.
The problem has been [Resolved]!
FYI, LocalDB is awesome and really isn't a hassle to deal with. I merely changed the Copy to Output Directory property (under the MDF file's build options) to 'Copy if newer'. Then I included everything I need in an InstallShield installer. The next problem that came up was resolved by giving all users permission to use the MDF and LDF files. I can now save permanent records. Thanks again, Steve!
I'm currently working on a database that comes with a legacy project which uses Entity Framework (updating code from the existing database using the Data Model Designer).
Currently I work on the master copy, and our developers work locally using SQL Server merge replication on their local PCs.
The issue is that we recently started doing change work that modifies the database schema, so when we use schema comparison (the Visual Studio SQL compare feature), there is a huge number of replication stored procedures and schema changes; if I apply the update as-is, it will corrupt the live database. So my current solution is to remove the dev server replication (so that the schema goes back to what it should look like without the replication changes), then do the schema compare and update, and then create a new merge replication so our developers can continue working on the dev DB.
I thought it was just a one-off DB schema change, but I've just realized there will be continuous changes for at least the next 3-6 months, so that basically makes each release a big headache (if the prep can even be called a 'release'...).
My SQL & Entity Framework knowledge is limited; can anyone shed some light on this for me, please?
Thanks in advance!
What's the observed need behind merge replication in the dev environment? I understand the need for devs to have a local copy they can mess with and run tests against, but I'm lost on why a full Publisher-Subscriber model is needed to synchronize DB state in a dev/test environment, and it seems to be causing you more problems than it solves, given the schema is going to be malleable for a few months.
If merge replication is not a hard requirement for the dev environment, I would suggest replacing it with an alternate method of distributing changes to the local copies. If the devs are working with a full copy of the DB anyway, I see no reason not to write a script that backs up the master copy on the dev server, then pulls that file down and restores it locally (a rough sketch follows after the next paragraph). Then, changes to that schema would be accomplished with change scripts, which can be run and tested locally before being applied to the master DB, then distributed on demand with another run of the backup/restore script.
It's a slightly more manual process and an older way to work with DBs, but it seems far more palatable to me than breaking and re-establishing replication regularly. It will require some collaboration to make sure devs aren't trying to make a backup at the same time, or making conflicting changes to local copies that will blow up on the master copy; your devs ideally should be talking to each other about this kind of thing anyway, and you might make the script smart enough to look for a recent backup before generating another.
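As a rough sketch of that backup/restore script, under made-up names (the server, share, paths, and the logical file names AppDb/AppDb_log are all assumptions; BACKUP and RESTORE are plain T-SQL issued through ADO.NET):

using System.Data.SqlClient;

class RefreshLocalCopy
{
    // Hypothetical server, database and path names throughout.
    const string DevServer =
        @"Data Source=DEVSERVER;Initial Catalog=master;Integrated Security=True";
    const string LocalServer =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True";

    static void Main()
    {
        // Back up the master copy on the dev server to a share everyone can read.
        Exec(DevServer, @"BACKUP DATABASE AppDb
            TO DISK = '\\devserver\share\AppDb.bak' WITH INIT");

        // Restore it locally, moving the files and replacing the previous copy.
        // The logical names (AppDb, AppDb_log) must match the source database.
        Exec(LocalServer, @"RESTORE DATABASE AppDb
            FROM DISK = '\\devserver\share\AppDb.bak'
            WITH REPLACE,
            MOVE 'AppDb'     TO 'C:\Data\AppDb.mdf',
            MOVE 'AppDb_log' TO 'C:\Data\AppDb_log.ldf'");
    }

    static void Exec(string connectionString, string sql)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.CommandTimeout = 0; // backups and restores can run long
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}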
One more thought; I don't know how feasible it is given your progress to date, but it's not impossible to switch from DB-First to Code-First. The conversion is basically a hybrid of Database First and Code First: the DB is reverse-engineered as a one-time operation to generate a model similar to DB First, but instead of EDMX files, the model is written out to source code files, and changes to those model files or to mapping conventions on the context can then be aggregated and applied to the schema as migrations in typical Code First style. Assuming you prepare the live DB for migrations as well (and have the live DB in the same state as the master dev DB prior to the model generation), this even removes the need for a SQL compare and update; you just apply the migrations to the live DB, the same as you would to any dev instance. The only potential gotcha is that some migrations can be written destructively, so you have to make sure what you're about to apply isn't going to clear out all the values in a renamed column.
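To give a flavor of the Code First side, here is what a typical migration looks like once the model exists; this uses the stock EF 6 migrations API, and the Customers/DateCreated names are invented for the example:

using System.Data.Entity.Migrations;

// Invented example: add a non-nullable DateCreated column to Customers.
public partial class AddCustomerDateCreated : DbMigration
{
    public override void Up()
    {
        // defaultValueSql backfills existing rows so the NOT NULL add succeeds
        AddColumn("dbo.Customers", "DateCreated",
            c => c.DateTime(nullable: false, defaultValueSql: "GETDATE()"));
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "DateCreated");
    }
}

The same migration can then be applied to any dev instance or to the live DB with Update-Database, or programmatically through DbMigrator.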
I am planning to use LocalDB for my next project and am thinking of making rotating backups of the database (it will be detached normally and only attached while the software is running). I was thinking about how to do an automatic restore from backup. This requires me to know whether the database is corrupted, and I can't find anything about this subject.
My guess is that people don't care about database integrity at that level. Backup and maintenance is a job for IT, and there are dozens of tools for SQL Express.
My database will be local on the PC and not big. I want to make it very simple:
detect whether the database is corrupt (how?)
offer user an option to restore database from most recent backup (this is easy)
P.S.: perhaps I don't know something about SQL Express, LocalDB, or LINQ to SQL. This is why the question is very generic.
After getting an answer I'll go with these simple options:
if the database can't be opened and the file exists, the database is corrupted, so offer a restore (if the file doesn't exist, create an empty database and do the initial setup);
provide the following options (if the user notices something unusual or gets errors during an update, etc.): check the database with DBCC CHECKDB (0, REPAIR_REBUILD) (could be good to try first) or restore from backup.
Automatic detection seems costly. Rather, keep weekly/monthly backups for manual restoration (if that is really necessary for old data) and back up the MDF file on every run to limit loss to one day maximum (which in my case is totally fine for an abnormal situation like database corruption).
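For completeness, a minimal sketch of the detection check from the options above (the connection string is a placeholder; any SqlException on open or during CHECKDB is treated as "offer a restore"):

using System;
using System.Data.SqlClient;

class CorruptionCheck
{
    // Placeholder connection string pointing at the attached copy.
    const string Cs =
        @"Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\App.mdf;Integrated Security=True";

    static bool LooksHealthy()
    {
        try
        {
            using (var conn = new SqlConnection(Cs))
            using (var cmd = new SqlCommand("DBCC CHECKDB WITH NO_INFOMSGS", conn))
            {
                cmd.CommandTimeout = 0; // CHECKDB can take a while
                conn.Open();            // fails outright on some corruption
                cmd.ExecuteNonQuery();  // raises SqlException if CHECKDB finds errors
                return true;
            }
        }
        catch (SqlException)
        {
            // Can't open, or CHECKDB reported problems: offer a restore.
            // (If the MDF doesn't exist at all, create a fresh database instead.)
            return false;
        }
    }

    static void Main()
    {
        if (!LooksHealthy())
            Console.WriteLine("Database appears corrupt - offer restore from backup.");
    }
}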
I am planning to use LocalDB for my next project...
You mean you plan to run a database that is explicitly not made for running in production on production?
And I was thinking about how to do an automatic restore from backup. This requires me to know whether the database is corrupted, and I can't find anything about this subject.
Read the SQL documentation. Backups and restores are all done using standard SQL commands. It's totally easy to do.
I want to make it very simple:
detect whether the database is corrupt (how?)
The DBCC CHECKDB command. It can take a lot of time, though, depending on the database.
perhaps I don't know something about sql express, localdb, linq-to-sql.
This can be solved by reading the documentation. I, for example, would never use LocalDB for anything in production.
Read the documentation? It starts with the first line:
Microsoft SQL Server 2014 Express LocalDB is an execution mode of SQL Server Express
targeted to program developers.
What are the best practices for database refactoring with Code First EF4?
I am interested to hear how people change the classes and the database when the RecreateDatabaseIfModelChanges option is not feasible. Migration of data will need to occur.
Currently Microsoft has a solution for doing this with model first:
http://blogs.msdn.com/b/adonet/archive/2010/02/08/entity-designer-database-generation-power-pack.aspx?PageIndex=2#comments
Does anyone have a good strategy for code first?
The EF team have been working on a migrations feature for EF that should solve this problem.
http://blogs.msdn.com/b/efdesign/archive/2010/10/22/code-first-database-evolution-aka-migrations.aspx
Scott Gu said on his recent tour around Europe that they should be releasing this feature soon. I'm holding my breath.
EXCITING UPDATE:
This has now been released as a CTP:
http://blogs.msdn.com/b/adonet/archive/2011/07/27/code-first-migrations-august-2011-ctp-released.aspx
I am working on a database context initializer which will notify the webmaster if the model and DB schema are out of sync and will show what differs. This can be useful for developers who prefer to have complete control over both the code-first model and the database schema. Check it out:
https://github.com/rialib/efextensions
In my Code First application, local builds have an app.config flag that denotes not being in production. When I'm not in production, it completely nukes and recreates the database. Since my production database user does NOT have permission to drop the database, even if my web.config transform is somehow missed (and EF therefore tries to recreate the database), my production database will not be deleted; instead, an exception will be thrown.
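A minimal sketch of that gating, assuming an appSettings flag named IsProduction and a placeholder MyContext (the initializer calls themselves are the stock EF API):

using System.Configuration;
using System.Data.Entity;

public class MyContext : DbContext { } // placeholder for the real context

public static class DbBootstrap
{
    // Call once at startup (e.g. Application_Start).
    public static void Configure()
    {
        // A missing flag defaults to production, so a missed transform fails safe.
        bool isProduction = bool.Parse(
            ConfigurationManager.AppSettings["IsProduction"] ?? "true");

        if (isProduction)
            Database.SetInitializer<MyContext>(null); // never touch the schema
        else
            Database.SetInitializer(new DropCreateDatabaseAlways<MyContext>());
    }
}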
My workflow goes like this:
1. Check out the production branch of code with the latest changes.
2. Quickly smoke/regression test (this should already have been done before checking code into the production branch, but just in case).
3. Download the latest backup of my production database and install it on my local SQLEXPRESS server.
4. Run Open DBDiff to compare the database my local code created (even though it's production code, since it's local it recreates the database) against the production backup.
5. Review the generated scripts and attempt to run them against the production backup.
6. Assuming no errors occurred, overwrite the database the code generated with the production backup and test against the production data to make sure all data is still intact.
7. Run the scripts on the real production database.
Step #2 automatically creates a new, clean database based on the latest data model, so I always know I have an up-to-date database that doesn't have artifacts from development efforts that may not be production-ready yet.
I am looking for a way to do daily deployments and keep the database scripts in line with releases.
Currently, we have a fairly decent way of deploying our source: we have unit code coverage, continuous integration and rollback procedures.
The problem is keeping the database scripts in line with a release. Everyone seems to try the script out on the test database and then run it on live; when the ORM mappings are updated (that is, the changes go live), the application picks up the new column.
The first problem is that none of the scripts HAVE to be written down anywhere; generally everyone "attempts" to put them into a Subversion folder, but some of the lazier people just run the script on live, and most of the time no one knows who has done what to the database.
The second issue is that we have four test databases and they are ALWAYS out of line, and the only way to truly line them back up is to do a restore from the live database.
I am a big believer that a process like this needs to be simple, straightforward and easy to use in order to help a developer, not hinder them.
What I am looking for are techniques/ideas that make it EASY for the developer to want to record their database scripts so they can be run as part of a release procedure. A process that the developer would want to follow.
Any stories, use cases or even a link would be helpful.
For this very problem I chose to use a migration tool: Migratordotnet.
With migrations (in any tool) you have a simple class used to perform your changes and undo them. Here's an example:
using System.Data;
using Migrator.Framework;

[Migration(62)]
public class _62_add_date_created_column : Migration
{
    public override void Up()
    {
        // add the column as nullable first, since existing rows have no value yet
        Database.AddColumn("Customers", new Column("DateCreated", DbType.DateTime));
        // seed it with data
        Database.ExecuteNonQuery("update Customers set DateCreated = getdate()");
        // now that every row has a value, tighten the column to not-null
        Database.ChangeColumn("Customers",
            new Column("DateCreated", DbType.DateTime, ColumnProperty.NotNull));
    }

    public override void Down()
    {
        Database.RemoveColumn("Customers", "DateCreated");
    }
}
This example shows how you can handle volatile updates, like adding a new not-null column to a table that has existing data. This can be automated easily, and you can easily go up and down between versions.
This has been a really valuable addition to our build, and has streamlined the process immensely.
I posted a comparison of the various migration frameworks in .NET here: http://benscheirman.com/2008/06/net-database-migration-tool-roundup
Read K. Scott Allen's series of posts on database versioning.
We built a tool for applying database scripts in a controlled manner based on the techniques he describes and it works well.
This could then be used as part of the continuous integration process, with each test database having changes deployed to it when a commit is made to the URL you keep the database upgrade scripts in. I'd suggest having a baseline script and upgrade scripts so that you can always run a sequence of scripts to get a database from its current version to the new state that is needed.
This does still require some process and discipline from the developers though (all changes need to be rolled into a new version of the base install script and a patch script).
We've been using SQL Compare from RedGate for a few years now:
http://www.red-gate.com/products/index.htm
The Pro version has a command-line interface that you could probably use to set up your deployment procedures.
We use a modified version of the database versioning described by K. Scott Allen. We use the Database Publishing Wizard to create the original baseline script, then a custom C# tool based on SQL SMO to dump the stored procedures, views and user functions. Change scripts which contain schema and data changes are generated by Red Gate tools. So we end up with a structure like:
Database\
    ObjectScripts\          - contains stored procs, views and user funcs, 1 per file
    baseline.sql            - database snapshot which includes tables and data
    sc.01.00.0001.sql       - incremental change scripts
    sc.01.00.0002.sql
    sc.01.00.0003.sql
The custom tool creates the database if necessary, applies baseline.sql if necessary, adds a SchemaChanges table if necessary, and applies the change scripts as necessary based on what's in the SchemaChanges table. That process occurs as part of a NAnt build script each time we do a deployment build via CruiseControl.NET.
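A stripped-down sketch of the heart of such a tool; the connection string and folder are illustrative, and a real version would also split scripts on GO batch separators:

using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class SchemaChanger
{
    // Illustrative connection string; script layout as shown above.
    const string Cs =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True";

    static void Main()
    {
        Exec(@"IF OBJECT_ID('SchemaChanges') IS NULL
               CREATE TABLE SchemaChanges
                   (ScriptName nvarchar(255) PRIMARY KEY, AppliedOn datetime NOT NULL)");

        // Apply sc.*.sql files in name order, skipping scripts already recorded.
        foreach (var file in Directory.GetFiles("Database", "sc.*.sql").OrderBy(f => f))
        {
            string name = Path.GetFileName(file);
            if (AlreadyApplied(name)) continue;

            Exec(File.ReadAllText(file));
            Exec("INSERT INTO SchemaChanges VALUES (@name, GETDATE())", name);
            Console.WriteLine("Applied " + name);
        }
    }

    static bool AlreadyApplied(string name)
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM SchemaChanges WHERE ScriptName = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }

    static void Exec(string sql, string name = null)
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(sql, conn))
        {
            if (name != null) cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}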
If anyone wants the source code to the SchemaChanger app, I can throw it up on CodePlex, Google Code or wherever.
If you are trying to keep database schemas in sync, try using the Red Gate SQL Comparison SDK. Build a temp database based on a create script (newDb); this is what you want your database to look like. Compare newDb against your old database (oldDb). Get a change set from that comparison and apply it using Red Gate. You could build this upgrade process into your tests, and you can try to get all the devs to agree that there is one place where the create script for the database is kept. This same practice works well for upgrading your database across several versions and running data migration scripts and processes between each step (using an XML doc to map the create and data migration scripts).
Edit: With the Red Gate technique, you are only concerned with create scripts, not upgrade scripts, since Red Gate comes up with the upgrade script. It will also let you drop and create indexes, stored procedures, functions, etc.
Go here:
https://blog.codinghorror.com/get-your-database-under-version-control/
Scroll down a bit to the list of 5 links to the odetocode.com website. Fantastic five-part series. I would use that as a starting point to get ideas and figure out a process that will work for your team.
You should consider using a build tool like MSBuild or NAnt. We use a combination of CruiseControl.NET, NAnt, and SourceGear Fortress to handle our deployments, including SQL objects. The NAnt db build task calls sqlcmd.exe to run update scripts in our dev and staging environments after they're checked into Fortress.
We use Visual Studio for Database Professionals and TFS to version and manage our database deployments. This allows us to treat our databases just like code (check out, check in, lock, view version history, branch, build, deploy, test, etc.) and even include them in the same solution files if we wish.
Our developers can work on local databases to avoid stepping on each other's changes in a shared environment. When they check database changes into TFS, we have continuous integration to build, test and deploy to our integrated dev environment. We have separate builds on release branches to create differential deployment scripts for each subsequent environment.
Later, if a bug is discovered in a release, we can go to a release branch and hotfix the code and database at the same time.
This is a great product, but its adoption was hindered early on by a Microsoft marketing blunder. It was originally a separate product under Team System, which meant that in order to use features of the Developer Edition and the Database Edition at the same time, you were required to step up to the much more expensive Team Suite edition. We (and many other customers) gave Microsoft grief about this, and we were very happy when they announced this year that DB Pro has been folded into the Developer Edition, and that anyone licensed with the Developer Edition can immediately install the Database Edition.
Gus off-handedly mentioned DB Ghost (above) – I second it as a potential solution.
A brief overview of how my company is using DB Ghost:
After the schema for a new DB has been reasonably settled during initial development, we use the DB Ghost 'Data and Schema Scripter' to create script (.sql) files for all the DB objects (and any static data), and we check these script files into source control (the tool separates the objects into folders such as 'Stored Procedures', 'Tables', etc.). At this point, we can use either the DB Ghost 'Packager' or the 'Packager Plus' tool to create a stand-alone executable that creates a new DB from these scripts.
All changes to the DB schema are checked into source control via check-ins to the specific script files.
At any time we can use the packager to create an executable to either (a) create a new DB or (b) update an existing DB. Some customization is required for certain path-dependent changes (e.g. changes that require data to be updated), but we have pre-update and post-update scripts that are run.
The 'update' process involves the creation of a clean 'source' DB and then (after pre-update custom scripts) a comparison between the schemas of the source DB and the target DB. DB Ghost updates the target DB to match.
We routinely make changes to production DBs (we have 14 customers in 7 different production environments) but inevitably deploy a large-enough set of changes with a DB Ghost update executable (created during our build process). Any production changes that were not checked into source control (or that were not checked into the appropriate branch being released) are LOST. This has forced everyone to check in changes consistently.
To summarize:
If you enforce a policy that all DB updates be deployed using a DB Ghost update executable, you can 'force' developers to check in their changes consistently, regardless of whether the changes are deployed manually in the interim.
Adding a step (or steps) to your build process to create a DB Ghost update executable will in effect verify that a DB can be created from scripts (because DB Ghost creates a 'source' DB even when creating the update executable package), and if you add a step (or steps) to execute the update package [on any of the four test DBs you mentioned], you can keep your test DBs in line with source.
There are some caveats and some limitations on which changes are 'easily' deployed with this tool (really a suite of related tools), but they are all fairly minor (at least for my company):
Renaming objects must be done in one of the custom scripts
The entire DB is always updated (e.g. objects in a single schema can't be updated alone), making it difficult to support customer-specific objects in the main application DB.
The book Refactoring Databases addresses many of these issues at a conceptual level.
As far as tools go, I know that DB Ghost works well for SQL Server. I have heard that the Data Dude edition of Visual Studio has really been improved upon in the latest release, but I don't have any experience with it.
As far as really pulling off continuous-integration-style database development goes, it gets resource-intensive really fast because of the number of database copies you need. It is very doable when the database can fit on a developer workstation, but impractical when the database is so large that it needs to be deployed across a grid. To do it you basically need one copy of the database per developer [developers who make DDL changes, not just changes to procs] plus six common copies. The common copies are as follows:
INT DEV --> Developers "check in" their refactoring to INT DEV for integration testing. When integration testing passes, this database is copied over to DEV.
DEV --> This is the "official" development copy of the database. INT DEV is refreshed regularly with a copy of DEV. Developers working on new refactorings get a fresh copy of the database from DEV.
INT QA --> Same idea as INT DEV except for the QA team. When integration tests pass here, this database is copied over to QA and to DEV*.
QA
INT PROD --> Same idea as INT QA except for production. When integration tests pass here, this database is copied over to PROD, QA*, and DEV*
PROD
*When copying databases across DEV/QA/PROD lines, you will also need to run scripts to update test data relevant to the particular environment (e.g. setting up users in QA that the QA team uses to test but that don't exist in production).
One possible solution is to look into implementing DML auditing on your test databases, then just rolling those audit logs into a script for final testing and live deployment. SQL Server 2008 significantly improves on DML auditing, but even SQL Server 2005 supports it via triggers.
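As a sketch of the SQL Server 2005-style trigger approach, auditing one hypothetical Customers table into a CustomersAudit log (all object names and the key column are invented for the example):

using System.Data.SqlClient;

class InstallDmlAudit
{
    // Illustrative connection string and object names throughout.
    const string Cs =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=TestDb;Integrated Security=True";

    static void Main()
    {
        Exec(@"CREATE TABLE CustomersAudit(
                   AuditId    int IDENTITY PRIMARY KEY,
                   Action     char(1) NOT NULL,  -- I/U/D
                   CustomerId int NOT NULL,
                   ChangedOn  datetime NOT NULL DEFAULT GETDATE(),
                   ChangedBy  sysname  NOT NULL DEFAULT SUSER_SNAME())");

        // CREATE TRIGGER must be the only statement in its batch,
        // hence the separate command.
        Exec(@"CREATE TRIGGER trgCustomersAudit
               ON Customers AFTER INSERT, UPDATE, DELETE AS
               BEGIN
                   SET NOCOUNT ON;
                   INSERT INTO CustomersAudit (Action, CustomerId)
                   SELECT CASE WHEN d.CustomerId IS NULL THEN 'I'
                               WHEN i.CustomerId IS NULL THEN 'D'
                               ELSE 'U' END,
                          COALESCE(i.CustomerId, d.CustomerId)
                   FROM inserted i
                   FULL OUTER JOIN deleted d ON i.CustomerId = d.CustomerId;
               END");
    }

    static void Exec(string sql)
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}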
There are a bunch of links in these posts that I'll want to follow up on (I "rolled my own" system years ago; I'll have to see if there are similarities). One thing you will need, and that I hope is mentioned in these links, is discipline. I don't quite see how any automated system can work if anyone can change anything at any time. (Your question implies that this can happen on your production systems, but obviously that can't be true.)
Having one person (the fabled "database administrator") dedicated to the task of managing changes to databases, particularly production databases, is a very common solution. As for maintaining consistency across X development and testing databases: if they are used by many users, once again you are best served by having an individual act as a "clearing house" for changes; if everyone has their own database instance, then they're responsible for keeping it in order, and a central, consistent database "source" will be critical when they need a refreshed baseline database.
Here's a recent Stack Overflow post that may be of interest: how-to-refresh-a-test-instance-of-sql-server-with-production-data-without-using
Red Gate has a paper describing how to achieve build automation: http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf
This is built around SQL Source Control, which integrates with SSMS and your existing source control system.
I've written a .NET based tool to handle database versioning in an automated fashion. We have been using this tool in production to handle rolling out database updates (including patches) to multiple environments, keep a log in each database of which scripts have been run, and do it all in an automated fashion. It has a command-line console so you can create batch scripts which use this tool. Check it out: https://github.com/bmontgomery/DatabaseVersioning
For what it's worth, this is a real example of a simple, low-cost approach used by my former employer (and one which I am trying to impress on my current employer as a basic first step).
Add a table called DB_VERSION or similar. In EVERY upgrade script, add a row to that table; it can include as few or as many columns as you see fit to describe the upgrade, but at a minimum I would suggest { VERSION, EXECUTION_DATE, DESCRIPTION, EXECUTION_USER }. Now you have a concrete record of what has been going on. If someone runs their own unauthorised script you'd still need to follow the advice of the answers above, but this is just a simple way of dramatically improving on your existing version control (i.e. none).
Now let's say you have an upgrade script from v2.1 to v2.2 of the database and you want to verify that the lone maverick guy has actually run it on his database: you can just search for rows where VERSION = 'v2.2', and if you get a result, don't run the upgrade script again. This can be built into a console utility app if necessary.
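A tiny sketch of that guard as a console utility (the connection string is a placeholder, and the column usage follows the table suggested above):

using System;
using System.Data.SqlClient;

class UpgradeGuard
{
    static void Main(string[] args)
    {
        // Placeholder connection string; version passed on the command line, e.g. "v2.2".
        const string cs =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True";
        string version = args.Length > 0 ? args[0] : "v2.2";

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM DB_VERSION WHERE VERSION = @v", conn))
        {
            cmd.Parameters.AddWithValue("@v", version);
            conn.Open();
            bool applied = (int)cmd.ExecuteScalar() > 0;
            Console.WriteLine(applied
                ? version + " already applied - do not run the upgrade script."
                : version + " not found - safe to run the upgrade script.");
        }
    }
}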