We are using EF Core with a database that is constantly being changed by the database administrators, so we can't always keep track of the changes. What is the best approach to using EF Core with such a database?
We tried the Database First approach, but we can't always regenerate the model classes in the project.
Sit down with management and talk about it. I see no way to work effectively when the DB constantly changes.
The way we do it here now is that designated people are responsible for:
Generating change scripts that can be applied automatically
Making sure the EF model is kept in sync.
When needed, we have the ability to give every developer his own environment AND have environments for validation.
There is NO way to do any work against a database that third parties are constantly changing, and it gets worse the more people are involved.
One way you can track changes in your database is by putting it under source control. At least then you know what changed. It's still not the best scenario overall when someone else modifies the database with poor communication.
I have two applications accessing a MySQL database using EF6. I use one to input new data and the other to display that data. However, I need to be notified of changes to the database. I was duplicating the database, grabbing a new copy each time and comparing the differences, but this is extremely inefficient.
Is there a way to monitor changes with EF6?
No, as far as I know there's no way to get notified about changes by the database via Entity Framework.
Usually you have to implement some communication between your two applications. There are several possibilities for that; the first two that come to mind are:
use some additional table in the DB where the first application makes some "notes" when it changes data. These "notes" may contain a timestamp, so the second application can quickly decide whether there is anything "new" in the database (the "notes" may also contain information about what exactly has been added/changed)
You could also implement some direct communication between the two applications using WCF/sockets/IPC, whatever fits. Depending on your scenario, this might be more performant (and changes might be detected more quickly), but it is usually more difficult to implement.
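A rough sketch of the first option with EF6 code-first (the ChangeNote entity, NotesContext and property names are invented for illustration): the writing application appends a row after each change, and the reading application polls for anything newer than the last timestamp it has processed, which is far cheaper than diffing database copies.

using System;
using System.Data.Entity;
using System.Linq;

// Hypothetical "notes" table that the writing application appends to after each change.
public class ChangeNote
{
    public int Id { get; set; }
    public DateTime ChangedAtUtc { get; set; }
    public string TableName { get; set; }
    public string Description { get; set; }
}

public class NotesContext : DbContext
{
    public DbSet<ChangeNote> ChangeNotes { get; set; }
}

public static class ChangePoller
{
    // The reading application remembers the timestamp of the last note it processed
    // and only asks the database for anything newer.
    public static DateTime CheckForChanges(DateTime lastSeenUtc)
    {
        using (var db = new NotesContext())
        {
            var newNotes = db.ChangeNotes
                .Where(n => n.ChangedAtUtc > lastSeenUtc)
                .OrderBy(n => n.ChangedAtUtc)
                .ToList();

            foreach (var note in newNotes)
            {
                Console.WriteLine("{0} changed: {1}", note.TableName, note.Description);
            }

            return newNotes.Count > 0 ? newNotes.Last().ChangedAtUtc : lastSeenUtc;
        }
    }
}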
This might seem like an odd question, but it's been bugging me for a while now. Given that I'm not a hugely experienced programmer, and I'm the sole application/C# developer in the company, I felt the need to sanity-check this with you guys.
We have created an application that handles shipping information internally within our company; this application works with a central DB at our IT office.
We've recently switched the DB from MySQL to MSSQL, and during the transition we decided to forgo the web services previously used and connect directly to the DB using an Application Role; for added security we only allow access to stored procedures, and all CRUD operations are handled via these.
However, we currently have a stored procedure for updating every field in one of our objects, which is quite a few stored procedures, and as such quite a bit of work on the client for the DataRepository (it needs separate code to call each procedure and pass the right params).
So I'm thinking: would it be better to simply update the entire object (in this case an object represents a table, for example shipments), given that a lot of that data will be changed one field at a time after the initial insert, and that we are trying to keep network usage down, as some of the clients run with limited internet?
What's the standard practice for this kind of thing? Or is there a method that I've overlooked?
I would say that updating all the columns for the entire row is a much more common practice.
If you have a proc for each field, and you change multiple fields in one update, you will have to wrap all the stored procedure calls into a single transaction to avoid the database getting into an inconsistent state. You also have to detect which field changed (which means you need to compare the old row to the new row).
Look into using an Object Relational Mapper (ORM) like Entity Framework for these kinds of operations. You will find that there is no general consensus on whether ORMs are a great solution for all data access needs, but it's hard to argue that they don't solve the problem of CRUD pretty comprehensively.
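A sketch of what that looks like with Entity Framework (the Shipment entity and ShippingContext are invented stand-ins for the shipments table): one method replaces the whole family of per-column procedures, and SaveChanges writes the row in a single implicit transaction.

using System.Data.Entity;

// Hypothetical entity and context standing in for the shipments table.
public class Shipment
{
    public int Id { get; set; }
    public string Status { get; set; }
    public string Destination { get; set; }
}

public class ShippingContext : DbContext
{
    public DbSet<Shipment> Shipments { get; set; }
}

public static class ShipmentRepository
{
    // Attach the object the client sent back and save the entire row in one UPDATE.
    public static void UpdateShipment(Shipment changed)
    {
        using (var db = new ShippingContext())
        {
            db.Shipments.Attach(changed);
            db.Entry(changed).State = EntityState.Modified; // every column is written
            db.SaveChanges();                               // single implicit transaction
        }
    }
}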
Connecting directly to the DB over the internet isn't something I'd switch to in a hurry.
"we decided to forgo the webservices previously used and connect directly to the DB"
What made you decide this?
If you are intent on this model, then a single SPROC to update an entire row would be advantageous over one per column. I have a similar application which uses SPROCs in this way; however, the data from the client comes in via XML, and a middleware application on our server end deals with updating the DB.
The standard practice is not to connect to the DB over the internet.
Even for a small app, this should be the overall model:
Client app -> over internet -> server-side app (WCF WebService) -> LAN/localhost -> SQL DB
Benefits:
your client app would not even know that you have switched DB implementations.
It would not know anything about DB security, etc.
you, as a programmer, would not be thinking in terms of "rows" and "columns" on client side. Those would be objects and fields.
you would be able to use different protocols: send only single field updates between client app and server app, but update entire rows between server app and DB.
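To make the last point concrete, here is a minimal sketch of such a server-side contract, assuming WCF (the interface, DTO and operation names are made up): the client sends only the field it changed over the internet, while the service implementation behind it is free to update the entire row against SQL.

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ShipmentDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Status { get; set; }
    [DataMember] public string Destination { get; set; }
}

// Hypothetical contract between the client app and the server-side app.
[ServiceContract]
public interface IShipmentService
{
    // Thin, field-level update over the internet...
    [OperationContract]
    void UpdateShipmentStatus(int shipmentId, string newStatus);

    // ...whole objects are only exchanged when actually needed.
    [OperationContract]
    ShipmentDto GetShipment(int shipmentId);
}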
Now, given your situation, updating the entire row (the entire object) is definitely more of a standard practice than updating a single column.
It's better to only update what you change, if you know what you changed (for example when using an ORM like Entity Framework), but if you're going down the stored proc route then yes, definitely update everything in a row at once; that is granular enough.
You should take the switch as an opportunity to change over to LINQ to Entities, though, while you're already in the middle of a big change, and ditch stored procedures in the process wherever possible.
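As a sketch of the first case, here is what "only update what you change" looks like with Entity Framework's change tracking (reusing the hypothetical Shipment/ShippingContext from the earlier sketch): because the entity was loaded through the context, EF knows its original values and generates an UPDATE containing only the modified column.

public static class SingleColumnUpdateExample
{
    public static void MarkDelivered(int shipmentId)
    {
        using (var db = new ShippingContext()) // hypothetical context from the earlier sketch
        {
            var shipment = db.Shipments.Find(shipmentId);
            shipment.Status = "Delivered"; // only Status is flagged as Modified
            db.SaveChanges();              // UPDATE ... SET Status = ... WHERE Id = ...
        }
    }
}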
We want to progress towards being able to do continuous delivery of our application into production. We currently deploy to Azure, use table/blob storage and have an Azure SQL database, which we access with Entity Framework.
As the database schema changes, we want to be able to automatically apply the schema changes to the production database, but as this will happen whilst the application is live and the code changes are being deployed to many nodes at the same time, we are not sure what the correct approach is.
After some reading it seems (and this makes sense) that the application needs to be tolerant of two different database schema versions, so that it doesn't matter whether an old or a new version of the code sees the database. However, I'm not sure what the best way is to handle this in the application using Entity Framework.
Should we have versioned instances of the EF generated classes in the code which know how to access a specific version of the schema? What happens when the schema is updated and an old version of the code is running against the database?
Our Entity Framework classes are mapped to views in specific schemas in the DB and nothing is mapped to the underlying tables, so potentially this could allow us to create v1 views which the old code uses and v2 views which the new code uses, but maintaining this feels like it would be a bit of a nightmare (it's already enough of a pain simply maintaining the EF mappings to views rather than tables).
So what are best practices in this area? What do others do to solve this problem?
Whether you use EF or not, maintaining the code's ability to work with 2 consecutive versions of the database is a good (and perhaps the only viable) approach here.
Here are some ways we handle specific types of migrations:
When adding a column, we can typically just add the column (with a default constraint if non-nullable) and not worry about the code. EF will never issue a "SELECT *", so it will be able to continue to function properly while ignoring the new column. Similarly, adding a table is easy.
When removing a column or table, simply keep it around one version longer than you would have otherwise.
For more complex migrations (e.g. completely changing the structure of a table or a segment of the data model), deploy the new model alongside backwards-compatibility views (or tables with triggers to keep them in sync), which live as long as the code that references them does. As you say, this can be a lot of work depending on the complexity of the migration, but it sounds like you are already well-positioned to do this because your EF entities point to views anyway. On the other hand, the benefit of this work is that you have more time to do the code migration. If you have a large codebase, this can be really beneficial in allowing you to migrate the data model to fit the needs of new features while still supporting old features without major code changes.
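To sketch what those compatibility views can look like from the EF side, assuming EF6-style fluent mapping (the Customer entity, the context name and the "v2" schema are invented): each code version maps its entities to the view schema written for it, and the views translate the new physical tables back into the shape that version expects.

using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContextV2 : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // This version of the code reads and writes through the "v2" views, while
        // old deployments keep a context mapped to the "v1" views until retired.
        modelBuilder.Entity<Customer>().ToTable("Customers", "v2");
    }
}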
As a side note, the difficulty of data migration often makes us push developing a finalized data model as far back as possible in the development schedule. With EF, you can write and test a lot of code before the data model is finalized (we use code-first to generate a sample SQLExpress database in unit tests, even though our production database is not maintained by code-first). That way, we make fewer incremental changes to the production data model once a new feature is released.
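A minimal sketch of that kind of test setup (the context, entity, initializer choice and NUnit usage are assumptions, not the poster's actual code): code-first drops and recreates a throw-away SQLExpress/LocalDB database from the current model before the tests run, even though the production database is never touched by code-first.

using System.Data.Entity;
using NUnit.Framework;

public class SampleContext : DbContext
{
    // "TestDb" is a hypothetical connection string name pointing at SQLExpress/LocalDB.
    public SampleContext() : base("TestDb") { }

    public DbSet<Customer> Customers { get; set; } // Customer as in the sketch above
}

[TestFixture]
public class CustomerQueryTests
{
    [SetUp]
    public void RebuildSampleDatabase()
    {
        // Recreate the sample database from the current code-first model for every run.
        Database.SetInitializer(new DropCreateDatabaseAlways<SampleContext>());
        using (var db = new SampleContext())
        {
            db.Database.Initialize(force: true);
            db.Customers.Add(new Customer { Name = "Sample" });
            db.SaveChanges();
        }
    }
}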
We have a system built using Entity Framework 5 for creating, editing and deleting data, but the problem we have is that sometimes EF is too slow, or it simply isn't possible to use Entity Framework (views which build data for tables based on users participating in certain groups in the database, etc.), and we have to use a stored procedure to update the data.
However, we have gotten ourselves into a situation where we have to save the changes through EF in order to have the data in the database and then call the stored procedures; we can't use TransactionScope as it always escalates to a distributed transaction and/or locks the table(s) for selects during the transaction.
We are also trying to introduce a DomainEvents pattern which queues events and raises them after the save changes, so we have the data we need in the DB, but then we may end up with the first part succeeding and the second part failing.
Are there any good ways to handle this or do we need to move away from EF entirely for this scenario?
I had a similar scenario. I later broke the process into smaller ones, used EF only, and made each small process short. Even though the overall time is longer, the system is easier to maintain and scale. I also minimized joins, only update the entity itself, and disabled EF's AutoDetectChangesEnabled and ValidateOnSaveEnabled.
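A sketch of those two settings (the Widget entity and context here are placeholders; both flags live on DbContext.Configuration in EF5/EF6):

using System.Data.Entity;

public class Widget // placeholder entity for illustration
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SmallStepContext : DbContext // hypothetical context name
{
    public DbSet<Widget> Widgets { get; set; }

    public SmallStepContext()
    {
        // Skipping automatic change detection and save-time validation removes
        // per-entity overhead when a short unit of work touches many rows.
        Configuration.AutoDetectChangesEnabled = false;
        Configuration.ValidateOnSaveEnabled = false;
    }
}

public static class SmallStep
{
    // One small process: load only the entity it needs, update it, save,
    // and keep the transaction short.
    public static void Rename(int id, string name)
    {
        using (var db = new SmallStepContext())
        {
            var widget = db.Widgets.Find(id);
            widget.Name = name;
            db.ChangeTracker.DetectChanges(); // needed because auto-detection is disabled
            db.SaveChanges();
        }
    }
}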
Sometimes, if you look at your problem in a different way, you may find a better solution.
Good luck!
I wonder what you use to update a client database when your program is patched.
Let's take a look at this scenario:
You have a desktop application (.NET, Entity Framework) which uses a SQL Server Compact database.
You release a new version of your application which uses an extended database schema.
The user downloads a patch with modified files
How do you update the database?
I wonder how you handle this process. I have some ideas, but I think more experienced people can give me better, tried-and-tested solutions or advice.
You need a migration framework.
There are existing OSS libraries like FluentMigrator
project page
wiki
long "Getting started" blogpost
Entity Framework Code First will also get its own migration framework, but it's still in beta:
Code First Migrations: Beta 1 Released
Code First Migrations: Beta 1 ‘No-Magic’ Walkthrough
Code First Migrations: Beta 1 ‘With-Magic’ Walkthrough (Automatic Migrations)
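As a rough illustration of the FluentMigrator route (the migration number, table and column names are invented): each change ships as a versioned migration class, the runner records which versions have already been applied, and a patched client only runs the migrations its local database is missing.

using FluentMigrator;

[Migration(2)] // version number of this schema change
public class AddShippedDateToOrders : Migration
{
    public override void Up()
    {
        // Applied on databases that are still below version 2.
        Alter.Table("Orders")
            .AddColumn("ShippedDate").AsDateTime().Nullable();
    }

    public override void Down()
    {
        // Allows rolling the schema back if the patch has to be reverted.
        Delete.Column("ShippedDate").FromTable("Orders");
    }
}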
You need to provide a DB upgrade mechanism, either explicit or hidden in your code, and thus implement something like a DB versioning chain.
There are a couple of aspects to it.
First is versioning. You need some way of tying the version of the DB to the version of the program; it could be something as simple as a table with a version number in it. You need to check it when the application starts as well.
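A sketch of such a check at startup (the SchemaVersion table, the context and the expected version are invented for illustration):

using System;
using System.Data.Entity;
using System.Linq;

// Hypothetical single-row table tying the DB version to the program version.
public class SchemaVersion
{
    public int Id { get; set; }
    public int Version { get; set; }
}

public class ClientDbContext : DbContext
{
    public DbSet<SchemaVersion> SchemaVersions { get; set; }
}

public static class StartupChecks
{
    // The schema version this build of the application was written against.
    private const int ExpectedVersion = 7;

    public static void EnsureDatabaseMatches()
    {
        using (var db = new ClientDbContext())
        {
            int actual = db.SchemaVersions.Select(v => v.Version).Single();

            if (actual < ExpectedVersion)
            {
                // An older database: run the upgrade chain here, or refuse to start.
                throw new InvalidOperationException(
                    "Database is at version " + actual + "; this build expects " + ExpectedVersion + ".");
            }

            if (actual > ExpectedVersion)
            {
                // A newer database, e.g. after the application was rolled back but the DB was not.
                throw new InvalidOperationException(
                    "Database is newer than this version of the application.");
            }
        }
    }
}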
One fun scenario is that you 'update' the application and DB successfully, and then for some operational reason the customer restores a previous version of the DB. Or, if you are on a frequent patch cycle, do you have to apply each patch in order, or can they catch up? Do you want to handle application-only and database-only upgrades differently?
There's no one right way to do this; you have to look at what sort of changes you make, and what level of complexity you are prepared to maintain in order to cope with everything that could go wrong.
A couple of things worth looking at.
Two databases: one for static 'read-only' data, and one for more dynamic stuff. Upgrading the static data can then simply be a restore from a resource within the upgrade package.
The other is how much you can do with metadata stored in DB tables. For instance, a version-based XSD to describe your objects instead of a concrete class. That goes in your read-only DB, so now you've updated code and application with a restore and possibly some transforms.
Lots of ways to go; just remember:
'users' will always find some way of making you look like an eejit, by doing something you never thought they would.
The more complex you make the system, the more chance of the above.
And last but not least, don't take shortcuts on data version conversions; if you lose data integrity, everything else you do will be wasted.