How to develop in MVC (C#) with an existing database

I'm building an application that will use a large database that's currently hosted on Azure SQL. I also want to use ASP.net Identity. Additionally, my local machine cannot connect to the Azure SQL database due to security restrictions (I can't remove these, they are corporate IT policies).
When developing, do either of the following make sense? Or is there another option that I'm unaware of?
Add the tables and fields from the large database, and maybe a few rows of sample data, to the LocalDB instance that Visual Studio uses by default? If I do this, how do I migrate over to the existing Azure database when it's time to go live?
Host the development application on Azure. This wouldn't be ideal, given that I'd need to upload the application with every change.

You could do that for small-scale testing and demonstration purposes, yes. Essentially, to interact with the database in ASP.NET you create a database context that points at a connection string, and during development that connection string points at your local copy. Provided the two schemas are identical, you can simply switch the connection string to the company database when it's time to go live. Be careful, however: with a relatively small dataset everything will run smoothly and quickly, but sloppy queries that go unnoticed locally can slow the whole application down against a large production dataset.
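For example, a minimal sketch of that setup, assuming EF6 and a hypothetical connection-string name ("AppDb") defined in Web.config, so that pointing the app at LocalDB during development and at the company's Azure SQL database at go-live is purely a configuration change:

    using System.Data.Entity; // EF6

    // Hypothetical context; "AppDb" is a connection string name defined in Web.config,
    // pointing at (localdb)\MSSQLLocalDB during development and at Azure SQL in production.
    public class AppDbContext : DbContext
    {
        public AppDbContext() : base("name=AppDb") { }

        public DbSet<Member> Members { get; set; }
    }

    public class Member
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }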
As for the development workflow, I would personally work locally at small scale until you're happy with the result. Before a full-scale launch, though, do a pilot deployment to Azure for a small group of users to surface any bugs you may have introduced. Once you've ruled out the obvious problems, the real launch is much safer.

To work with separate development and release environments:
You need an intranet copy of the remote database first, then continue working with a code-first approach.
Reverse-engineer your database to code-first:
https://learn.microsoft.com/en-us/ef/core/get-started/aspnetcore/existing-db
https://cmatskas.com/scaffolding-dbcontext-and-models-with-entityframework-core-2-0-and-the-cli/
https://wildermuth.com/2017/12/20/Reverse-Engineering-Existing-Databases-in-Entity-Framework-Core-2
Enable database migrations: https://msdn.microsoft.com/en-us/library/dn579398(v=vs.113).aspx
Add the Identity framework to the intranet database code-first: https://learn.microsoft.com/en-us/aspnet/identity/overview/getting-started/adding-aspnet-identity-to-an-empty-or-existing-web-forms-project
Carefully maintain the migration code in later tasks; the remote database will then be updated automatically when your code is released. A sketch of what the scaffolding step produces is shown below.
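With EF Core, for example, the reverse-engineering step described in those links is a single command (Scaffold-DbContext in the Package Manager Console, or dotnet ef dbcontext scaffold on the CLI), and it generates plain C# classes roughly like the following; the entity and table names here are hypothetical placeholders:

    using Microsoft.EntityFrameworkCore;

    // Hypothetical entity generated from an existing table.
    public class Member
    {
        public int MemberId { get; set; }
        public string Name { get; set; }
    }

    // Scaffolded context; a real run generates one DbSet per table in the database.
    public class CompanyDbContext : DbContext
    {
        public CompanyDbContext(DbContextOptions<CompanyDbContext> options)
            : base(options) { }

        public DbSet<Member> Members { get; set; }
    }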

Related

Easy way to convert the MS Access database to Web application

As per requirement, we need to convert an existing MS Access database to a web application. Is there any easy way to do this? At the moment, users insert data into the Access database through Access forms, and they wish to keep using those forms even after the new web application is created. In other words, users should be able to work with the MS Access database both through the Access forms and through the web application.
Please guide me to a way to solve this issue.
Best Regards,
Ranish
You can use Office 365 and have somewhat of a web-based application.
https://blogs.office.com/en-us/2012/07/30/get-started-with-access-2013-web-apps/
Or, store Access in SharePoint, but your functionality will be quite limited. Keep in mind, no VBA will run on a web-based application.
The alternative is to use SQL Server Express, and ASP.NET, both of which are free from Microsoft. I'll tell you now, though, the learning curve will be quite steep if you have never used these technologies before. This combo, however, will give you the most control!
You can get the .NET framework from here.
https://www.microsoft.com/en-us/download/details.aspx?id=30653
You can get SQL Server Express from here.
https://www.microsoft.com/en-US/download/details.aspx?id=42299
Four years later and, according to this:
https://www.comparitech.com/net-admin/microsoft-access/
it is still a question for many. Access can be converted to a web app in almost no time. In particular, Access forms are very easy to create with a library like Jam.py.
The process was discussed on Reddit in April 2021:
https://www.reddit.com/r/MSAccess/comments/mj4aya/moving_ms_access_to_web/
I have seen quite a few Access databases with more than 100 tables, all converted successfully to SQLite3. After inspecting the imported tables via the provided link, forms are created automatically, which leaves the Access reports and business logic untouched. Reports can be designed in LibreOffice as Jam.py templates, and business logic can be moved from VBA to Python if there is a need to do so.
SQLite was selected as the default database for the conversion because it is very portable. It looks like the converted app can then be moved to any database that Jam.py supports via export/import.
Cheers
First of all, a database and a web application are not mutually exclusive.
Back to the original question: I have done multiple projects like that. A client starts with a small Microsoft Access database and a couple of users; then they migrate to a web application when they get more traffic.
First, you want to convert the data from the MS Access database to SQL Server; an Access database is not meant to be accessed by multiple users simultaneously. Then you develop the web application, which uses SQL Server as the back-end database.
Right before you go live, convert the data from the MS Access database to SQL Server one last time, and then do not let anyone use the old Access database anymore.
Most of the time, whoever created the MS Access database is not a software engineer, so the tables are not normalized and have no relationships at all. I normally create a new, normalized database in SQL Server, and then write a small program to convert the data from MS Access into the SQL database, along the lines of the sketch below.
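A minimal sketch of such a conversion program, assuming the ACE OLE DB provider is installed and using made-up table names and connection strings:

    using System.Data;
    using System.Data.OleDb;
    using System.Data.SqlClient;

    class AccessToSqlConverter
    {
        static void Main()
        {
            // Hypothetical connection strings; adjust the file path and server name.
            const string accessCs = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\legacy.accdb";
            const string sqlCs = "Server=.;Database=NewNormalizedDb;Integrated Security=true";

            using (var source = new OleDbConnection(accessCs))
            using (var target = new SqlConnection(sqlCs))
            {
                source.Open();
                target.Open();

                // Read one Access table and bulk-copy it into the matching SQL Server table.
                // A real conversion would reshape the data into the normalized tables here.
                using (var cmd = new OleDbCommand("SELECT * FROM Members", source))
                using (IDataReader reader = cmd.ExecuteReader())
                using (var bulk = new SqlBulkCopy(target) { DestinationTableName = "dbo.Members" })
                {
                    bulk.WriteToServer(reader);
                }
            }
        }
    }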
There are generally two approaches, with more details covered in this article looking at ways to convert Microsoft Access to a web application.
Direct Port means a basic migration whereby you port your basic Access forms more or less verbatim into a web portal, i.e. Microsoft Access moved as-is to a browser-based version using a third-party tool. Some of these tools are quite mundane in that they just let you run the Access application inside a web browser (whoopee!), while others are quite drawn out and then limit how much you can change afterward, with the more complex cases requiring a consultant to help you migrate the system. It also helps to know your user count, since the higher it is, the less appealing a third-party porting service becomes due to its subscription-based pricing.
Upsize - the more involved or complex your data structure is, the more an upsize using custom development, splitting the system across web and data tiers, might be worth it if:
You've got a special process or some secret sauce you're looking to keep.
You're likely to have a significant user count and want to avoid subscription fees
You're inherently cynical or cautious, and want to handle your own architecture and security
You're looking for a specific user experience
If you mean how to convert it automatically, and you want to keep both Access and the web application (I don't recommend that; I would move everything to the web app), I would do the following:
Export your Access data in CSV/Excel
Use a platform like DaDaBIK to import the CSV/Excel file and automatically create a Web app based on that file, with data stored on SQL server, MySQL, PostgreSQL or SQLite.
Connect your Access database to the SQL Server (or MySQL, ...) database created by DaDaBIK; from now on, Access will only be used as a front end.
Now you have a web app created with DaDaBIK and your Access front end both working on the same database. As I said, I would skip step 3 and keep only the web app; this helps with data integrity when two users access the same record.
Depending on how complex your Access application is (e.g. complex validation rules or custom VBA code you added), you may reach your goal without any coding, or you may need some.

Suggestions on on-going development of database schema when it's under replication

I'm currently working on a database that comes with a legacy project which uses Entity Framework (the code is updated from the existing database using the Data Model Designer).
Currently I work on the master copy, and our developers work locally using SQL Server merge replication on their own PCs.
The issue is that we recently started change work that modifies the database schema. When we use schema comparison (the Visual Studio SQL compare feature), there is a huge number of replication stored procedures and schema changes that would corrupt the live database if I applied the update. My current workaround is to remove the replication on the dev server (so that the schema goes back to what it should look like without the replication artifacts), do the schema compare and update, and then create a new merge replication so our developers can continue working on the dev database.
I thought it was a one-off schema change, but I've just realized there will be continuous changes for at least the next 3-6 months, which basically makes each release a big headache (if it can even be called 'release' prep...).
My SQL & EntityFramework knowledge is limited, can anyone shed some light on this for me please?
Thanks in advance!
What's the observed need behind merge replication in the dev environment? I understand the need for devs to have a local copy they can mess with, run tests against, etc., but I'm lost on why a full Publisher-Subscriber model is needed to synchronize DB state in a dev/test environment, and it seems to be causing you more problems than it solves given the schema is going to be malleable for a few months.
If merge replication is not a hard requirement for the dev environment, I would suggest you replace it with an alternate method of distributing changes to the local copies. If the devs are working with a full copy of the DB anyway, I see no reason not to write a script that backs up the master copy on the dev server, then pulls that file down and restores it locally. Then, changes to that schema would be accomplished with change scripts, which can be run and tested locally before being applied to the master DB, then distributed on-demand with another run of the backup/restore script.
It's a slightly more manual process and an older way to work with DBs, but it seems far more palatable to me than breaking and re-establishing replication regularly. It'll require some collaboration to make sure devs aren't trying to make a backup at the same time or making conflicting changes to local copies that will blow up on the master copy; your devs ideally should be talking to each other anyway about this kind of thing, and you might make the script smart enough to look for a recent backup before generating another.
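A minimal sketch of that backup/restore script, written here in C# against hypothetical server names and paths (a batch or PowerShell script would work just as well):

    using System.Data.SqlClient;

    static class DevDbRefresh
    {
        static void Main()
        {
            // Hypothetical names; the dev server writes the backup to a share the devs can reach.
            const string devServerCs = "Server=devsql;Database=master;Integrated Security=true";
            const string localCs = "Server=(local);Database=master;Integrated Security=true";

            Run(devServerCs, @"BACKUP DATABASE [ProjectDb] TO DISK = N'\\devsql\backups\ProjectDb.bak' WITH INIT");

            // WITH REPLACE overwrites the local copy; add MOVE clauses if the local file paths differ.
            Run(localCs, @"RESTORE DATABASE [ProjectDb] FROM DISK = N'\\devsql\backups\ProjectDb.bak' WITH REPLACE");
        }

        static void Run(string connectionString, string sql)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 }) // backups can be slow
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }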
One more thought, don't know how feasible it is given your progress to date; it's not impossible to switch from DB-First to Code-First. The conversion is basically a hybrid process of Database First and Code First; the DB is reverse-engineered as a one-time operation to generate a model similar to DB First, but instead of EDMX files, the model is written out to source code files, and changes to those model files or to mapping conventions on the context can then be aggregated and applied to the schema as migrations in typical Code First style. Assuming you prepare the live DB for migrations as well (and have the live DB in the same state as the master Dev DB prior to the model generation), this even removes the requirement of a SQL compare and update; you just apply the migrations to the live DB, same as you would to any Dev instance. The only potential gotcha is that some migrations can be written destructively, so you have to make sure what you're about to apply isn't going to clear out all the fields in a renamed column.
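For illustration, an EF6 Code First migration for a column rename might look like the sketch below (the table and column names are invented). The destructive case mentioned above is when the scaffolder emits a DropColumn/AddColumn pair instead of RenameColumn, which would discard the data:

    using System.Data.Entity.Migrations;

    // Hypothetical migration that renames a column without losing data.
    public partial class RenameCustomerName : DbMigration
    {
        public override void Up()
        {
            RenameColumn("dbo.Customers", "Name", "FullName"); // safe: keeps existing values

            // Destructive alternative to watch out for:
            // DropColumn("dbo.Customers", "Name");
            // AddColumn("dbo.Customers", "FullName", c => c.String());
        }

        public override void Down()
        {
            RenameColumn("dbo.Customers", "FullName", "Name");
        }
    }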

Database Deployment Practices

I have deployed plenty of software to my clients. Most are Windows Forms applications.
Here is my current practice.
Manually install SQL Server Express and SQL Server Management Studio on each client PC.
Then use ClickOnce to install the application from the server.
When there is a change in the code, I use ClickOnce to deploy it (NO PROBLEM with this step).
But when there is a change in a database column, what do I do?
I have even tried writing database update scripts. Each time the program starts, it reads through the .sql update files and runs them if the database exists. This solves the problem of updating database columns, but it does not help with my DEBUGGING work when a customer complains about wrong data. At that point, I have to go to their site personally to check it out.
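For what it's worth, a minimal sketch of that startup update-script approach, assuming a Scripts folder shipped with the application and a SchemaVersion table (both hypothetical) used to track which scripts have already run:

    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    static class SchemaUpdater
    {
        // Runs each .sql file in the scripts folder, in name order, exactly once.
        // Note: files containing GO batch separators would need to be split first.
        public static void ApplyPendingScripts(string connectionString, string scriptFolder)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Run(conn, @"IF OBJECT_ID('dbo.SchemaVersion') IS NULL
                            CREATE TABLE dbo.SchemaVersion (ScriptName nvarchar(260) PRIMARY KEY)");

                foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
                {
                    var name = Path.GetFileName(file);

                    var check = new SqlCommand(
                        "SELECT COUNT(*) FROM dbo.SchemaVersion WHERE ScriptName = @n", conn);
                    check.Parameters.AddWithValue("@n", name);
                    if ((int)check.ExecuteScalar() > 0) continue; // already applied

                    Run(conn, File.ReadAllText(file));

                    var record = new SqlCommand(
                        "INSERT INTO dbo.SchemaVersion (ScriptName) VALUES (@n)", conn);
                    record.Parameters.AddWithValue("@n", name);
                    record.ExecuteNonQuery();
                }
            }
        }

        static void Run(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
        }
    }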
I find it difficult to have the database installed on the client PC, as it makes my debugging work very difficult. I am thinking about moving my clients' databases to a host on an online server. But that comes with these constraints:
What if the internet is down?
What if my customer has no internet?
Could you advise me? Is this a common problem faced by developers? What is the common practice out there? Do Windows Azure or SQL CE help?
Depending on the data I would recommend using SQL CE.
If the data volume isn't too large, speed is not the primary goal (CE is slower than Express), and you don't need database features that CE doesn't support (e.g. stored procedures), it is the better choice IMHO, because:
The client does not need to install a full SQL server (easier installation/deployment)
You do not have problems with multiple SQLExpress instances
Your SW doesn't need to worry if there even is a SQL instance
Less resources used on the client side
Additionally, the clients could send you their SQL CE database file for inspection, so you do not need to go to their site.
It is also relatively easy to implement an off-site sync with SQL CE and the MS Sync Framework.
Installing one database per client PC can be tricky. I think you have a decent handle on how to deal with the issue currently. It seems like the real issue you are currently facing is debugging. To deal with this, there are a couple ways you could go:
Have the customer upload their copy of the database back to you. This would provide you with the data they have and you could use it with a debug copy of your code to identify the issues. The downside is that if the database is large it might be an issue transferring it.
Remote onto the customer's machine. Observe the system remotely using something like CoPilot. That way you could see what is happening in its natural environment.
There are probably other ways, but these are a couple of good ones. As for using an online database, this is an option but it brings its own set of issues with it. You mentioned a couple. As for Azure, that is cloud-based (online) so the same issues will apply. SQL CE won't help you any more than your current installation does.
Bottom line is that I would recommend you look into the ways to fix your one issue (as listed above) instead of creating a whole new set of issues by moving to an Internet-based solution. I would only recommend moving to the Internet if it was addressing a larger business need (for example, mobility). Doing the same thing you have been doing only online will probably just make life harder.
To recap the comments below since they are so pertinent to the issue: if you are choosing between file-based databases that don't need to be physically installed on the machine, your best choices are probably SQLite and SQL CE. Microsoft supports SQL CE better, but it is a larger package and has fewer features than the trimmer SQLite. Here is a good discussion on the differences:
https://stackoverflow.com/questions/2278104/sql-ce-sqlite-what-are-the-differences-between-them
However, the issue gets more complicated when you start looking at linq2sql since that is designed for SQL server. Microsoft does not support SQL CE with linq2sql out of the box, although there is a work-around that will get it to work:
http://pietschsoft.com/post/2009/01/Using-LINQ-to-SQL-with-SQL-Server-Compact-Edition.aspx
SQLite is not supported at all with linq2sql but there is a way to use linq to talk with SQLite:
LINQ with SQLite (linqtosql)
This library also supports other common databases including MySQL and Firebird.
You could use the SQLCMD utility to execute the change script, as mentioned in this related question.
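A small sketch of invoking SQLCMD from code or an installer step, assuming sqlcmd.exe is on the PATH and using made-up server, database, and script names:

    using System.Diagnostics;

    static class ChangeScriptRunner
    {
        public static int RunScript(string server, string database, string scriptPath)
        {
            var psi = new ProcessStartInfo
            {
                // -S server, -d database, -E integrated security, -i input file, -b stop on error
                FileName = "sqlcmd",
                Arguments = string.Format("-S {0} -d {1} -E -i \"{2}\" -b", server, database, scriptPath),
                UseShellExecute = false
            };

            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
                return process.ExitCode; // non-zero means the script failed
            }
        }
    }

    // Usage: ChangeScriptRunner.RunScript(@"CLIENTPC\SQLEXPRESS", "AppDb", "update-1.2.sql");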

Is it possible to deploy an enterprise ASP.NET application and SQL schema changes with zero downtime?

We have a huge ASP.NET web application which needs to be deployed to LIVE with zero or nearly zero downtime. Let me point out that I've read the following question/answers but unfortunately it doesn't solve our problems as our architecture is a little bit more complicated.
Let's say that currently we have two IIS servers responding to requests and both are connected to the same MSSQL server. The solution seems like a piece of cake, but it isn't, because of the major schema changes we have to apply from time to time. Because of its huge size, a simple database backup takes around 8 minutes, which has become unacceptable, but it is a must before every new deploy for security reasons.
I would like to ask your help to get this deployment time down as much as possible. If you have any great ideas for a different architecture or maybe you've used tools which can help us here then please do not be shy and share the info.
Currently the best idea we have come up with is buying another SQL server which would be set up as a replica of the original DB. From the load balancer we would route all new traffic to one of the two IIS web servers. When the second web server is free of running sessions, we can deploy the new code to it. Now comes the hard part. At this point we would take the website offline and stop the replication between the two SQL servers, so that we immediately have a snapshot of the database in a hopefully consistent state (saving us 7.5 of the 8 minutes). Finally we would update the database schema on the main SQL server, and route all traffic via the updated web server while we upgrade the second web server to the new version.
Please also share your thoughts regarding this solution. Can we somehow manage to eliminate the need to take the website offline? How do blue-chip companies with mammoth web applications handle deployment?
Every idea or suggestion is more than welcome! Buying new hardware or software is really not a problem - we just miss the breaking idea. Thanks in advance for your help!
Edit 1 (2010.01.12):
Another requirement is to eliminate manual intervention, so in fact we are looking for a way which can be applied in an automated way.
Let me just remind you the requirement list:
1. Backup of database
2a. Deploy of website
2b. Update of database schema
3. Change to updated website
4 (optional): easy way of reverting to the old website if something goes very wrong.
First off, you are likely unaware of the "point in time restore" concept. The long and short of it is that if you're properly backing up your transaction logs, it doesn't matter how long your backups take -- you always have the ability to restore back to any point in time. You just restore your last backup and reapply the transaction logs since then, and you can get a restore right up to the point of deployment.
What I would tend to recommend would be reinstalling the website on a different Web Site definition with a "dead" host header configured -- this is your staging site. Make a script which runs your db changes all at once (in a transaction) and then flips the host headers between the live site and the staging site.
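A minimal sketch of the "run all DB changes at once, in a transaction" part, with placeholder schema changes; the host-header flip itself would be scripted separately against IIS:

    using System.Data.SqlClient;

    static class DeploymentScript
    {
        public static void ApplySchemaChanges(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    try
                    {
                        // Placeholder changes; a real deployment would run this release's scripts here.
                        Run(conn, tx, "ALTER TABLE dbo.Orders ADD ShippedUtc datetime2 NULL");
                        Run(conn, tx, "CREATE INDEX IX_Orders_ShippedUtc ON dbo.Orders (ShippedUtc)");

                        tx.Commit();   // all changes become visible at once
                    }
                    catch
                    {
                        tx.Rollback(); // leave the live schema untouched on any failure
                        throw;
                    }
                }
            }
        }

        static void Run(SqlConnection conn, SqlTransaction tx, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn, tx)) cmd.ExecuteNonQuery();
        }
    }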
Environment:
Current live web site(s)
Current live database
New version of web site(s)
New version of database
Approach:
Set up a feed (e.g. replication, a stored procedure, etc.) so that the current live database server sends data updates to the new version of the database.
Change your router so that the new requests get pointed to the new version of the website until the old sites are no longer serving requests.
Take down the old site and database.
In this approach there is zero downtime because both the old site and the new site (and their respective databases) are permitted to serve requests side-by-side. The only problem scenario is clients who have one request go to the new server and a subsequent request go to the old server. In that scenario, they will not see the new data that might have been created on the new site. A solution to that is to configure your router to temporarily use sticky sessions and ensure that new sessions all go to the new web server.
One possibility would be to use versioning in your database.
So you have a global setting which defines the current version of all stored procedures to use.
When you come to do a release you do the following:
1. Change the database schema, ensuring no stored procedures of the previous version are broken.
2. Release the next version of the stored procedures.
3. Change the global setting, which switches the application to use the next set of stored procedures/new schema.
The tricky portion is ensuring you don't break anything when you change the database schema.
If you need to make fundamental changes, you'll need to either use 'temporary' tables, which are used for one version, before moving to the schema you want in the next version, or you can modify the previous versions stored procedures to be more flexible.
That should mean almost zero downtime, if you can get it right.
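A small sketch of the version-switch idea, assuming a hypothetical Settings table that holds the current procedure version and stored procedures suffixed with that version number:

    using System.Data;
    using System.Data.SqlClient;

    static class VersionedProcedures
    {
        // Reads the global version setting, then calls the matching stored procedure,
        // e.g. dbo.GetOrders_v2 once the setting is flipped from 1 to 2.
        public static DataTable GetOrders(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                int version;
                using (var cmd = new SqlCommand(
                    "SELECT Value FROM dbo.Settings WHERE Name = 'ProcVersion'", conn))
                {
                    version = (int)cmd.ExecuteScalar();
                }

                using (var cmd = new SqlCommand("dbo.GetOrders_v" + version, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table);
                    return table;
                }
            }
        }
    }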
Firstly - do regular, small changes - I've worked as a freelance developer in several major Investment Banks on various 24/7 live trading systems and the best, smoothest deployment model I ever saw was regular (monthly) deployments with a well defined rollback strategy each time.
In this way, all changes are kept to a minimum, bugs get fixed in a timely manner, development doesn't feature creep, and because it's happening so often, EVERYONE is motivated to get the deployment process as automatic and hiccup free as possible.
But inevitably, big schema changes come along that make a rollback very difficult (although it's still important to know - and test - how you'll rollback in case you have to).
For these big schema changes we worked a model of 'bridging the gap'. That is to say that we would implement a database transformation layer which would run in near real-time, updating a live copy of the new style schema data in a second database, based on the live data in the currently deployed system.
We would copy this a couple of times a day to a UAT system and use it as the basis for testing (hence testers always have a realistic dataset to test, and the transformation layer is being tested as part of that).
So the change in database is continuously running live, and the deployment of the new system then is simply a case of:
Freeze everyone out
Switch off the transformation layer
Turn on the new application layer
Switch users over to the new application layer
Unfreeze everything
This is where rollback becomes something of an issue though. If the new system has run for an hour, rolling back to the old system is not easy. A reverse transformation layer would be the ideal but I don't think we ever got anyone to buy into the idea of spending the time on it.
In the end we'd deploy during the quietest period possible and get everyone to agree that rollback would take us to the point of switchover and anything missing would have to be manually re-keyed. Mind you - that motivates people to test stuff properly :)
Finally - how to do the transformation layer - in some of the simpler cases we used triggers in the database itself. Only once I think we grafted code into a previous release that did 'double updates', the original update to the current system, and another update to the new style schema. The intention was to release the new system at the next release, but testing revealed the need for tweaks to the database and the 'transformation layer' was in production at that point, so that process got messy.
The model we used most often for the transformation layer was simply another server process running, watching the database and updating the new database based on any changes. This worked well because that code runs outside of production and can be changed at will without affecting the production system (well, it can if you run it against a replica of the production database; otherwise you have to watch out not to tie the production database up with some suicidal queries - just put your best, most conscientious people on this part of the code!).
Anyway - sorry for the long ramble - hope I put the idea over - continuously do your database deployment as a 'live, running' deployment to a second database, then all you've got to do to deploy the new system is deploy the application layer and pipe everything to it.
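For illustration only, a very rough sketch of such a watcher process; the rowversion-based change detection and the table names are assumptions, not something from the original system:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    // Transformation-layer sketch: polls the old-schema database for rows changed
    // since the last pass and upserts them into the new-schema database.
    class TransformationWorker
    {
        static void Main()
        {
            const string oldDbCs = "Server=prod-replica;Database=OldSchema;Integrated Security=true";
            const string newDbCs = "Server=staging;Database=NewSchema;Integrated Security=true";
            byte[] lastVersion = new byte[8]; // last rowversion seen (assumes a rowversion column)

            while (true)
            {
                using (var source = new SqlConnection(oldDbCs))
                using (var target = new SqlConnection(newDbCs))
                {
                    source.Open();
                    target.Open();

                    var query = new SqlCommand(
                        "SELECT Id, Name, RowVer FROM dbo.Customers WHERE RowVer > @v ORDER BY RowVer", source);
                    query.Parameters.AddWithValue("@v", lastVersion);

                    using (var reader = query.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // MERGE keeps the copy idempotent if the same row is seen twice.
                            using (var upsert = new SqlCommand(
                                @"MERGE dbo.Clients AS t
                                  USING (SELECT @id AS Id, @name AS FullName) AS s ON t.Id = s.Id
                                  WHEN MATCHED THEN UPDATE SET FullName = s.FullName
                                  WHEN NOT MATCHED THEN INSERT (Id, FullName) VALUES (s.Id, s.FullName);", target))
                            {
                                upsert.Parameters.AddWithValue("@id", reader.GetInt32(0));
                                upsert.Parameters.AddWithValue("@name", reader.GetString(1));
                                upsert.ExecuteNonQuery();
                            }

                            lastVersion = (byte[])reader[2];
                        }
                    }
                }

                Thread.Sleep(TimeSpan.FromSeconds(5)); // near real-time polling interval
            }
        }
    }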
I saw this post a while ago, but have never used it, so can't vouch for ease of use/suitability, but MS have a free web farm deployment framework that may suit you:
http://weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx
See my answer here: How to deploy an ASP.NET Application with zero downtime
My approach is to use a combination of polling AppDomains and a named mutex to create an atomic deployment agent.
I would recommend using Analysis Services instead of the database engine for your reporting needs. Then you could process your cubes, move your database, change a connection string, reprocess your cubes, and thus have zero downtime.
Dead serious... There isn't a better product in the world than Analysis Services for this type of thing.
As you say you don't have a problem buying new servers, I suggest the best way would be to get a new server and deploy your application there first. Follow the steps below:
1. Add any required certificates to this new server and test your application with the new settings.
2. Shut down your old server and assign its IP to the new server; the downtime is only however long the old server takes to shut down plus the time it takes you to assign the IP to the new server.
3. If you see the new deployment is not working, you can always revert by repeating step 2 in the other direction.
Regarding your database backup, you would have to set up a backup schedule.
I just answered a similar question here: Deploy ASP.NET web site and Update MSSQL database with zero downtime
It discusses how to update the database and IIS website during a deployment with zero downtime, mainly by ensuring your database is always backwards compatible (but just to the last application release).

Best means to store data locally when offline

I am in the midst of writing a small program (more to experiment with vs 2010 than anything else)
Despite being an experiment it has some practical use for our local athletics club.
My thought was to access the DB (currently online) to download the current members and store them locally on a laptop (this is an MS SQL table, used to power the club's website).
Take the laptop to the event (yes, there ARE places that don't have internet coverage), add members to that day's race (also a row in a SQL table, though no changes would be made to this), and record results (new records in a third table).
Once home, showered and within internet access again, upload/edit the tables as per the race results/member changes etc.
So I was thinking I'd do something like write xml files locally with the data, including a field to indicate changes etc?
If anyone can point me in a direction I would appreciate it...hell if anyone could tell me if this has a name, I'd appreciate it.
Essentially what you need is, in addition to your remote data store, a local data store on your desktop. You could then write your code by hand to sync the data stores when you go offline / online, or you could use the Microsoft Sync framework to handle it for you.
I've personally used the Sync framework on a number of projects and once you get used to the conventions, it's pretty easy to use.
If a local storage format is what you're after, SQLite is one option. You can copy your tables from the server to your local SQLite database.
You could also save your data to files, but XML is a horrible format for doing this. You'll probably want to use YAML or JSON instead.
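For example, a minimal sketch of a local SQLite store with a flag marking rows that still need to be pushed to the online database (this uses the Microsoft.Data.Sqlite package; the table and column names are made up):

    using Microsoft.Data.Sqlite;

    static class LocalResultsStore
    {
        public static void SaveResult(int memberId, string raceTime)
        {
            using (var conn = new SqliteConnection("Data Source=club-offline.db"))
            {
                conn.Open();

                var create = conn.CreateCommand();
                create.CommandText =
                    @"CREATE TABLE IF NOT EXISTS Results (
                          MemberId INTEGER NOT NULL,
                          RaceTime TEXT NOT NULL,
                          IsSynced INTEGER NOT NULL DEFAULT 0)"; // 0 = still needs uploading
                create.ExecuteNonQuery();

                var insert = conn.CreateCommand();
                insert.CommandText =
                    "INSERT INTO Results (MemberId, RaceTime, IsSynced) VALUES ($id, $time, 0)";
                insert.Parameters.AddWithValue("$id", memberId);
                insert.Parameters.AddWithValue("$time", raceTime);
                insert.ExecuteNonQuery();
            }
        }
        // Back home, a sync routine would read the rows WHERE IsSynced = 0, push them to the
        // online MS SQL database, and then set IsSynced = 1.
    }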
You may want to take a look at SQL Server Compact -- it provides some decent capabilities with synchronizing back with the mothership SQL server.
If you're using MS SQL Server for production, and you only need to work offline on your personal computer, you could install MS SQL Server Express locally. The advantage here over using a different local datastore is that you can reuse your schema, stored procedures, etc. essentially only needing to change the connection string to your application (which you could run locally too through Visual Studio). You would have to write code to manually sync your online and offline db instances, but since it's a small application, it may be reasonable to just copy the entire database from production to local and then from local to production when you get home (assuming you're the only one updating the db, and wouldn't be potentially wiping out any new records entered in production while you were at the event).
Google Gears (http://gears.google.com/) is intended for web apps (I couldn't quite tell from your description whether yours is one).
