I just recently posted my first WCF REST service. It works great so far. Yesterday a request came in to alter something in the database table that houses the entries that come in to us via this service. Since the contract wasn't changing (it was basically a change to the insert within the data layer), I assumed this change could be implemented easily. So I modified the insert, recompiled the code and republished the site/service.
The next time we received a request, it did not perform my updated insert, but rather the old version from the prior build. I thought perhaps I had screwed something up while compiling, so I recompiled and published again, but ended up with the same result.
Has anyone seen this happen before? How is this possible? I assume I must be overlooking something minor.
It turned out to be an issue with DNS pointing to another server hosting an older, outdated version of the process that was supposed to have been decommissioned.
Important note: this is only happening on clients, not in the dev environment.
I'm having an odd issue. An application I have written / deployed at my company uses 3 connection strings within it to query data from various sources as needed. Every so often, I get an error report from one of the end users' machines stating something like
System.InvalidOperationException
No connection string named 'CM_PS1Context' could be found in the application config file.
So, I went to this machine, opened up the exe.config file, and sure enough, 2 of the connection string entries were just.... gone. I went to my machine, grabbed the entries from my config, dropped them in, everything was working again.
I can't make heads or tails of this, nothing in my code is modifying the app.config (user.config, yes). I am using EntityFramework for all 3 databases. Has anyone seen this, or maybe has an idea of what might be happening?
Note, this is not happening to all clients; it seems very, very random at the moment, as I am unable to reproduce the error myself (nor can anyone else who tries).
For the moment, my fix is going to be something crude like watching for that error and, when it occurs, reinserting the connection string, but... that's a very messy, ugly fix that I'd prefer NOT to put in production. Any suggestions would be very helpful!
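For what it's worth, the crude fallback I'm describing would look roughly like the sketch below. This is only an illustration: 'CM_PS1Context' is taken from the error above, and the connection string value and provider are placeholders. It simply re-adds the missing entry to the exe.config if it can't be found at startup.

    using System.Configuration;

    static class ConfigRepair
    {
        public static void EnsureConnectionString()
        {
            var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
            var connStrings = config.ConnectionStrings.ConnectionStrings;

            // If the entry has vanished from the exe.config, put it back and save.
            if (connStrings["CM_PS1Context"] == null)
            {
                connStrings.Add(new ConnectionStringSettings(
                    "CM_PS1Context",
                    @"data source=SERVER\INSTANCE;initial catalog=CM_PS1;integrated security=True", // placeholder
                    "System.Data.SqlClient"));

                config.Save(ConfigurationSaveMode.Modified);
                ConfigurationManager.RefreshSection("connectionStrings");
            }
        }
    }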
This is not really a question (yet!) but rather sharing something that happened to me last night and the solution was completely different from those found on stackoverflow or google.
After adding some new functionality to an existing application, which resulted in a couple of changes to the model, I deployed the application to our development environment without an issue. However, when I deployed it to our production environment I started getting this "There is already an object named 'TableName' in the database." error.
Clearly, Entity Framework was trying to (re)create my model from scratch instead of updating it. After trying several solutions, including SetInitializer(null) in Global.asax, resetting migrations, etc., nothing worked; each attempt only led to other errors.
At some point I just rolled back all of my attempted fixes and started looking for a solution from scratch.
The solution was actually to go into the very first Migrations file (typically called init or Initial) and comment out the code that was trying to create the tables.
Afterwards, I could see that there was another migration trying to drop a field, also generating an equally ugly error (something along the lines of "Unable to remove field 'FieldName' because it doesn't exist"), so I had to comment that line out as well.
So basically, after commenting out a few migration lines, everything started working and the model did get upgraded to the latest version.
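For illustration, the edited initial migration ended up looking conceptually like the sketch below (the table and column names here are placeholders, not my actual model):

    using System.Data.Entity.Migrations;

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
            // Commented out because the table already exists in production;
            // EF only needs to record this migration as applied, not re-run it.
            //CreateTable(
            //    "dbo.TableName",
            //    c => new
            //        {
            //            Id = c.Int(nullable: false, identity: true),
            //            Name = c.String(),
            //        })
            //    .PrimaryKey(t => t.Id);
        }

        public override void Down()
        {
            //DropTable("dbo.TableName");
        }
    }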
Now, here's the question.
Clearly, Dev and Prod were out of sync DB-wise (which is fine, to my eyes), but this ended up creating migrations that, for some reason, were not compatible with production, and this is where I cannot understand why Entity Framework is not able to manage it. I mean, why was EF trying to create a table if it already existed? And why was EF trying to drop a field that was not present in the table schema?
Is this something not covered by EF? Or did something happen at some point that messed up the entire EF setup of my project?
I wonder what you use for updating a client database when your program is patched.
Let's take a look at this scenario:
You have a desktop application (.NET, Entity Framework) which uses a SQL Server Compact database.
You release a new version of your application which uses an extended database schema.
The user downloads a patch with modified files
How do you update the database?
I wonder how you handle this process. I have some ideas of my own, but I think more experienced people can give me better, tried-and-tested solutions or advice.
You need a migration framework.
There are existing OSS libraries like FluentMigrator
project page
wiki
long "Getting started" blogpost
Entity Framework Code First will also get its own migration framework, but it's still in beta:
Code First Migrations: Beta 1 Released
Code First Migrations: Beta 1 ‘No-Magic’ Walkthrough
Code First Migrations: Beta 1 ‘With-Magic’ Walkthrough (Automatic Migrations)
You need to provide a DB upgrade mechanism, either explicitly or hidden in your code, and thus implement something like a DB versioning chain.
There are a couple of aspects to it.
First is versioning. You need some way of tying the version of the db to the version of the program; it could be something as simple as a table with a version number in it. You need to check it when the application starts as well.
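A rough sketch of that startup check, assuming a single-row SchemaVersion table and SQL Server Compact as in the question (the table and column names are made up):

    using System;
    using System.Data.SqlServerCe;

    static class SchemaVersionCheck
    {
        const int ExpectedVersion = 7; // the db version this build of the program expects

        public static void Verify(string connectionString)
        {
            using (var conn = new SqlCeConnection(connectionString))
            using (var cmd = new SqlCeCommand("SELECT Version FROM SchemaVersion", conn))
            {
                conn.Open();
                int actual = Convert.ToInt32(cmd.ExecuteScalar());

                if (actual < ExpectedVersion)
                {
                    // run the upgrade scripts/migrations from 'actual' up to ExpectedVersion
                }
                else if (actual > ExpectedVersion)
                {
                    throw new InvalidOperationException(
                        "The database is newer than this version of the application.");
                }
            }
        }
    }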
One fun scenario is that you 'update' the application and db successfully, and then for some operational reason the customer restores a previous version of the db. Or, if you are on a frequent patch cycle, do you have to apply each patch in order, or can they catch up in one go? Do you want to handle application-only or database-only upgrades differently?
There's no one right way for this, you have to look at what sort of changes you make, and what level of complexity you are prepared to maintain in order to cope with everything that could go wrong.
A couple of things are worth looking at.
Two databases: one for static 'read-only' data, and one for more dynamic stuff. Upgrading the static data can then simply be a restore from a resource within the upgrade package.
The other is how much you can do with metadata stored in db tables. For instance, a version-based XSD to describe your objects instead of a concrete class. That goes in your read-only db; now you've updated code and application with a restore and possibly some transforms.
Lots of ways to go, just remember
'users' will always find some way of making you look like an eejit, by doing something you never thought they would.
The more complex you make the system, the more chance of the above.
And last but not least, don't take short cuts on data version conversions, if you lose data integrity, everything else you do will be wasted.
Over the last few months I have been developing an application using Entity Framework code first and sql server CE for the first time. I have found the combination of the 2 very useful, and compared to my old way of doing things (particularly ADO.NET) it allows for insanely faster dev times.
However, this morning some colleagues and I came across a problem which we have never seen mentioned in any documentation regarding SQL Server CE: it cannot handle more than one insert at once!
I was of the opinion that CE might become my database of choice until I came across this problem. The reason I discovered it was that in my application I needed to make multiple requests to a web service at once, and this was introducing a bit of a bottleneck, so I proceeded to use a Parallel.Invoke call to make the multiple requests.
This was all working fine until I turned on my application's message logging service. At this point I began to get the following error when making the web requests:
A duplicate value cannot be inserted into a unique index. [ Table name = Accounts,Constraint name = PK__Accounts__0000000000000016 ]
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlServerCe.SqlCeException: A duplicate value cannot be inserted into a unique index. [ Table name = Accounts,Constraint name = PK__Accounts__0000000000000016 ]
Strange, I thought. My first reaction was that it must be something to do with the DbContext; maybe the DbContext I was using was static, or something else in my Repository class was static and causing the problem, but after sniffing around I was certain it was nothing to do with my code.
I then brought it to the attention of my colleagues, and after a while it was decided it must be SQL Server CE. After we all set up different test projects attempting to recreate the problem using threads, it was recreated almost every time, and when using SQL Server Express the problem wasn't occurring.
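The test projects boiled down to something like the sketch below (the entity and context names are made up, and it assumes the context is pointed at the SQL CE provider); note that each thread gets its own context, so it isn't a shared-DbContext problem:

    using System.Data.Entity;
    using System.Threading.Tasks;

    public class Account
    {
        public int Id { get; set; }      // identity primary key by EF convention
        public string Name { get; set; }
    }

    public class TestContext : DbContext
    {
        public DbSet<Account> Accounts { get; set; }
    }

    class Repro
    {
        static void Main()
        {
            Parallel.For(0, 100, i =>
            {
                using (var ctx = new TestContext())
                {
                    ctx.Accounts.Add(new Account { Name = "Account " + i });
                    // Against SQL Server CE this intermittently fails with
                    // "A duplicate value cannot be inserted into a unique index";
                    // the same code against SQL Server Express runs cleanly.
                    ctx.SaveChanges();
                }
            });
        }
    }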
I just think it is a bit strange that CE cannot handle something as simple as this. I mean the problem is not only with threading - are you telling me that it cannot be used for a web application where two users may insert into a table at the same time...INSANITY!
Anyway, just wondering if anyone else has come across this late into a project like me and been shocked (and annoyed) that it works this way? Also if anyone could shed light on why it is limited in this way that would be cool.
It looks like a bug in SQL CE. See http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=641518
We have a huge ASP.NET web application which needs to be deployed to LIVE with zero or nearly zero downtime. Let me point out that I've read the following question/answers but unfortunately it doesn't solve our problems as our architecture is a little bit more complicated.
Let's say that currently we have two IIS servers responding to requests and both are connected to the same MSSQL server. The solution seems like a piece of cake but it isn't, because of the major schema changes we have to apply from time to time. Because of its huge size, a simple database backup takes around 8 minutes, which has become unacceptable, but it is a must before every new deploy for security reasons.
I would like to ask your help to get this deployment time down as much as possible. If you have any great ideas for a different architecture or maybe you've used tools which can help us here then please do not be shy and share the info.
Currently the best idea we came up with is buying another SQL server which would be set up as a replica of the original DB. From the load balancer we would route all new traffic to one of the two IIS webservers. When the second webserver is free of running sessions, we can deploy the new code to it. Now comes the hard part. At this point we would go offline with the website, take down the replication between the two SQL servers so we directly have a snapshot of the database in a hopefully consistent state (this saves us 7.5 of the 8 minutes). Finally, we would update the database schema on the main SQL server, and route all traffic via the updated webserver while we are upgrading the second webserver to the new version.
Please also share your thoughts regarding this solution. Can we somehow manage to eliminate the need for going offline with the website? How do blue-chip companies with mammoth web applications do deployments?
Every idea or suggestion is more than welcome! Buying new hardware or software is really not a problem - we just miss the breaking idea. Thanks in advance for your help!
Edit 1 (2010.01.12):
Another requirement is to eliminate manual intervention, so in fact we are looking for a way which can be applied in an automated way.
Let me just remind you the requirement list:
1. Backup of database
2a. Deploy of website
2b. Update of database schema
3. Change to updated website
4. (optional) An easy way of reverting to the old website if something goes very wrong.
First off, you are likely unaware of the "point in time restore" concept. The long and short of it is that if you're properly backing up your transaction logs, it doesn't matter how long your backups take -- you always have the ability to restore back to any point in time. You just restore your last backup and reapply the transaction logs since then, and you can get a restore right up to the point of deployment.
What I would tend to recommend would be reinstalling the website on a different Web Site definition with a "dead" host header configured -- this is your staging site. Make a script which runs your db changes all at once (in a transaction) and then flips the host headers between the live site and the staging site.
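One way to script the flip is via the Microsoft.Web.Administration API (this is only a sketch; the site names and bindings are placeholders, and the same swap can also be scripted with appcmd or PowerShell):

    using Microsoft.Web.Administration;

    class HostHeaderFlip
    {
        static void Main()
        {
            using (var iis = new ServerManager())
            {
                var live = iis.Sites["MyApp-Live"];
                var staging = iis.Sites["MyApp-Staging"];

                // e.g. "*:80:www.example.com" and "*:80:staging.example.local"
                var liveBinding = live.Bindings[0].BindingInformation;
                var stagingBinding = staging.Bindings[0].BindingInformation;

                // Swap the host headers so the staged content becomes live in one commit.
                live.Bindings[0].BindingInformation = stagingBinding;
                staging.Bindings[0].BindingInformation = liveBinding;

                iis.CommitChanges();
            }
        }
    }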
Environment:
Current live web site(s)
Current live database
New version of web site(s)
New version of database
Approach:
Set up a feed (e.g. replication, a stored procedure, etc.) so that the current live database server is sending data updates to the new version of the database.
Change your router so that the new requests get pointed to the new version of the website until the old sites are no longer serving requests.
Take down the old site and database.
In this approach there is zero downtime because both the old site and the new site (and their respective databases) are permitted to serve requests side-by-side. The only problem scenario is clients who have one request go to the new server and a subsequent request go to the old server. In that scenario, they will not see the new data that might have been created on the new site. A solution to that is to configure your router to temporarily use sticky sessions and ensure that new sessions all go to the new web server.
One possibility would be to use versioning in your database.
So you have a global setting which defines the current version of all stored procedures to use.
When you come to do a release you do the following:
1. Change the database schema, ensuring no stored procedures of the previous version are broken.
2. Release the next version of the stored procedures.
3. Change the global setting, which switches the application to use the next set of stored procedures/new schema.
The tricky portion is ensuring you don't break anything when you change the database schema.
If you need to make fundamental changes, you'll need to either use 'temporary' tables, which are used for one version before moving to the schema you want in the next version, or modify the previous version's stored procedures to be more flexible.
That should mean almost zero downtime, if you can get it right.
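A rough sketch of what the switch can look like from the application side, assuming a GlobalSettings table holding the current procedure version and stored procedures suffixed with that version (all names here are made up):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class VersionedProcs
    {
        public static DataTable GetOrders(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Read the global setting that says which set of procedures is current.
                var versionCmd = new SqlCommand(
                    "SELECT Value FROM dbo.GlobalSettings WHERE Name = 'ProcVersion'", conn);
                int version = Convert.ToInt32(versionCmd.ExecuteScalar());

                // Call the procedure belonging to that version, e.g. dbo.GetOrders_v3.
                var cmd = new SqlCommand("dbo.GetOrders_v" + version, conn)
                {
                    CommandType = CommandType.StoredProcedure
                };

                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table);
                return table;
            }
        }
    }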
Firstly - do regular, small changes - I've worked as a freelance developer in several major Investment Banks on various 24/7 live trading systems and the best, smoothest deployment model I ever saw was regular (monthly) deployments with a well defined rollback strategy each time.
In this way, all changes are kept to a minimum, bugs get fixed in a timely manner, development doesn't feature creep, and because it's happening so often, EVERYONE is motivated to get the deployment process as automatic and hiccup free as possible.
But inevitably, big schema changes come along that make a rollback very difficult (although it's still important to know - and test - how you'll rollback in case you have to).
For these big schema changes we worked a model of 'bridging the gap'. That is to say that we would implement a database transformation layer which would run in near real-time, updating a live copy of the new style schema data in a second database, based on the live data in the currently deployed system.
We would copy this a couple of times a day to a UAT system and use it as the basis for testing (hence testers always have a realistic dataset to test, and the transformation layer is being tested as part of that).
So the change in database is continuously running live, and the deployment of the new system then is simply a case of:
Freeze everyone out
Switching off the transformation layer
Turning on the new application layer
Switching users over to new application layer
Unfreeze everything
This is where rollback becomes something of an issue though. If the new system has run for an hour, rolling back to the old system is not easy. A reverse transformation layer would be the ideal but I don't think we ever got anyone to buy into the idea of spending the time on it.
In the end we'd deploy during the quietest period possible and get everyone to agree that rollback would take us to the point of switchover and anything missing would have to be manually re-keyed. Mind you - that motivates people to test stuff properly :)
Finally - how to do the transformation layer - in some of the simpler cases we used triggers in the database itself. Only once I think we grafted code into a previous release that did 'double updates', the original update to the current system, and another update to the new style schema. The intention was to release the new system at the next release, but testing revealed the need for tweaks to the database and the 'transformation layer' was in production at that point, so that process got messy.
The model we used most often for the transformation layer was simply another server process running, watching the database and updating the new database based on any changes. This worked well as that code runs outside of production and can be changed at will without affecting the production system (well, if you run it against a replica of the production database it can; otherwise you have to watch out not to tie the production database up with some suicidal queries - just put the best, most conscientious guys on this part of the code!).
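As a rough illustration of that watcher style (not our actual code), the loop below polls the old-schema database for recently changed rows and upserts them into the new-schema database; the table and column names and the LastModified convention are assumptions for the sake of the sketch:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class SchemaTransformer
    {
        public void Run(string oldDbConnStr, string newDbConnStr, CancellationToken token)
        {
            var lastSync = DateTime.MinValue;

            while (!token.IsCancellationRequested)
            {
                var syncStarted = DateTime.UtcNow;

                using (var source = new SqlConnection(oldDbConnStr))
                using (var target = new SqlConnection(newDbConnStr))
                {
                    source.Open();
                    target.Open();

                    // Pick up anything that changed in the live system since the last pass.
                    var read = new SqlCommand(
                        "SELECT Id, Name, Email FROM dbo.Customers WHERE LastModified >= @since", source);
                    read.Parameters.AddWithValue("@since", lastSync);

                    using (var reader = read.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Transform the old-style row into the new-style schema and upsert it.
                            var write = new SqlCommand(
                                "UPDATE dbo.Clients SET FullName = @name, Email = @email WHERE Id = @id " +
                                "IF @@ROWCOUNT = 0 " +
                                "INSERT INTO dbo.Clients (Id, FullName, Email) VALUES (@id, @name, @email)",
                                target);
                            write.Parameters.AddWithValue("@id", reader["Id"]);
                            write.Parameters.AddWithValue("@name", reader["Name"]);
                            write.Parameters.AddWithValue("@email", reader["Email"]);
                            write.ExecuteNonQuery();
                        }
                    }
                }

                lastSync = syncStarted;
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }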
Anyway - sorry for the long ramble - hope I put the idea over - continuously do your database deployment as a 'live, running' deployment to a second database, then all you've got to do to deploy the new system is deploy the application layer and pipe everything to it.
I saw this post a while ago, but have never used it, so can't vouch for ease of use/suitability, but MS have a free web farm deployment framework that may suit you:
http://weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx
See my answer here: How to deploy an ASP.NET Application with zero downtime
My approach is to use a combination of polling AppDomains and a named mutex to create an atomic deployment agent.
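As a very rough illustration of the named-mutex half of that (the mutex name and the work inside it are placeholders): a named mutex is visible across processes, so only one agent on the machine can perform the switchover at a time.

    using System;
    using System.Threading;

    class DeploymentAgent
    {
        static void Main()
        {
            using (var mutex = new Mutex(false, @"Global\MyApp.DeploymentSwitch"))
            {
                if (!mutex.WaitOne(TimeSpan.FromMinutes(5)))
                    throw new TimeoutException("Another deployment is still in progress.");

                try
                {
                    // Swap the site content / flip the bindings atomically here.
                }
                finally
                {
                    mutex.ReleaseMutex();
                }
            }
        }
    }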
I would recommend using Analysis Services instead of the database engine for your reporting needs. Then you could process your cubes, move your database, change a connection string, reprocess your cubes and thus have zero downtime.
Dead serious... There isn't a better product in the world than Analysis Services for this type of thing.
As you say you don't have a problem buying new servers, I suggest the best way would be to get a new server and deploy your application there first. Follow the steps below:
1. Add any certificates required to the new server and test your application with the new settings.
2. Shut down your old server and assign its IP to the new server; the downtime would only be as long as it takes your old server to shut down and for you to assign its IP to the new server.
3. If you see the new deployment is not working, you can always revert by following step 2 again.
Regarding your database backup you would have to set a backup schedule.
I just answered a similar question here: Deploy ASP.NET web site and Update MSSQL database with zero downtime
It discusses how to update the database and IIS website during a deployment with zero downtime, mainly by ensuring your database is always backwards compatible (but just to the last application release).