Entity Framework and Database Triggers - c#

I have a database that was created in a separate project; Entity Framework generated a .edmx model file and the model classes from the existing database.
There are several things that add entries to the database (other parts of the backend, the front-end site, the API, etc.). Currently my method is a loop that checks the database for new entries every 5 seconds (basically just a query against the table for entries newer than the most recent entry I know of), and then I use each new entry to perform actions that are not database related.
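Roughly, the loop looks like this (MyEntities, Entries, and DoNonDatabaseWork are placeholder names, not my real code):

using System.Linq;
using System.Threading;

// Hypothetical sketch of the current polling approach: query for rows
// newer than the last one seen, process them, sleep, repeat.
static void PollForNewEntries()
{
    long lastSeenId = 0;
    while (true)
    {
        using (var context = new MyEntities())   // EF context from the .edmx
        {
            var newEntries = context.Entries
                .Where(e => e.Id > lastSeenId)
                .OrderBy(e => e.Id)
                .ToList();

            foreach (var entry in newEntries)
            {
                DoNonDatabaseWork(entry);   // the non-database action
                lastSeenId = entry.Id;
            }
        }
        Thread.Sleep(5000);   // wait 5 seconds, then check again
    }
}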
Is what I'm doing fine, or is there a better way to be notified of new entries than constantly querying the database, preferably one that can be built upon/with EF?
Thanks for any help!

If you want your app to be notified as soon as any database records are inserted, updated, or deleted, and to do some extra processing on them, then you have two choices.
You can go with SqlDependency or SqlTableDependency. Both notify the application when something in the database changes. There is just one constraint: you must be able to enable Service Broker for the database using ALTER DATABASE MyDatabase SET ENABLE_BROKER. (This is important because some databases don't support broker services, e.g. SQL Azure.)
Here are some good links to explore both approaches:
https://github.com/christiandelbianco/monitor-table-change-with-sqltabledependency
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/detecting-changes-with-sqldependency
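As a rough illustration of the SqlDependency approach (the table, column, and connection-string names below are placeholders, not from your project):

using System.Data.SqlClient;

// Minimal SqlDependency sketch. Requires Service Broker, and the query
// must follow the notification rules: a two-part table name, an explicit
// column list, no SELECT *.
static void WatchForNewEntries(string connectionString)
{
    // Call SqlDependency.Start(connectionString) once at app startup.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Id, CreatedAt FROM dbo.Entries", connection))
    {
        var dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
        {
            // A subscription fires only once: read the new rows here,
            // process them, then call WatchForNewEntries again.
        };

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            // The command must be executed for the subscription to register.
        }
    }
}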

Related

Sync framework 2.1 how to find changes?

How can I find changes in the database using Sync framework 2.1?
What I'm trying to accomplish is:
I have different scopes in the database. When a user inserts or updates something in the application database (local), the application needs to sync with the server database. Is there a way to let Sync Framework sync only the tables with changes? That would be a lot more efficient than this:
foreach (string scope in _scopenames)
{
    StartSync(scope);
}
I can't just send the table name as a parameter from the model class to the sync class, because that way only one table will sync and you will not receive the changes from other clients.
Regardless of how many tables you have in a scope, only the tables that have changes cause a sync. If you want to be able to control which specific tables sync, you can create one scope per table.
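A rough sketch of what a per-scope sync looks like with Sync Framework 2.1 (the connection strings and scope name are placeholders, and this assumes the scopes are already provisioned for SqlSyncProvider):

using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;

// Synchronize a single scope between the local and server databases.
// With one scope per table, calling this per scope gives per-table control.
static SyncOperationStatistics StartSync(
    string scope, string localConnStr, string serverConnStr)
{
    var orchestrator = new SyncOrchestrator
    {
        LocalProvider = new SqlSyncProvider(scope, new SqlConnection(localConnStr)),
        RemoteProvider = new SqlSyncProvider(scope, new SqlConnection(serverConnStr)),
        Direction = SyncDirectionOrder.UploadAndDownload
    };
    return orchestrator.Synchronize();
}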

c# update single db field or whole object?

This might seem like an odd question, but it's been bugging me for a while now. Given that I'm not a hugely experienced programmer, and I'm the sole application/C# developer in the company, I felt the need to sanity check this with you guys.
We have created an application that handles shipping information internally within our company; this application works with a central DB at our IT office.
We've recently switched DB from MySQL to MSSQL, and during the transition we decided to forgo the web services previously used and connect directly to the DB using an application role. For added security we only allow access to stored procedures, and all CRUD operations are handled via these.
However, we currently have a stored procedure for updating every single field in one of our objects, which means quite a few stored procedures, and as such quite a bit of work on the client in the DataRepository (separate code to call each procedure and pass it the right params).
So I'm thinking: would it be better to simply update the entire object (in this case an object represents a table, for example shipments), given that a lot of that data will be changed one field at a time after the initial insert, and that we are trying to keep network usage down, as some of the clients will run with limited internet?
What's the standard practice for this kind of thing? Or is there a method that I've overlooked?
I would say that updating all the columns for the entire row is a much more common practice.
If you have a proc for each field, and you change multiple fields in one update, you will have to wrap all the stored procedure calls into a single transaction to avoid the database getting into an inconsistent state. You also have to detect which field changed (which means you need to compare the old row to the new row).
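For illustration, wrapping several per-field proc calls in one transaction might look like this (the proc names, parameters, and the ExecProc helper are hypothetical, and connectionString/shipmentId stand in for your own values):

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

// Both per-field updates commit together or not at all.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // the connection enlists in the ambient transaction
    ExecProc(connection, "dbo.UpdateShipmentStatus", shipmentId, "Dispatched");
    ExecProc(connection, "dbo.UpdateShipmentCarrier", shipmentId, "DHL");
    scope.Complete();    // without this, everything rolls back on dispose
}

// Hypothetical helper: executes a proc taking (@Id, @Value).
static void ExecProc(SqlConnection conn, string proc, int id, string value)
{
    using (var cmd = new SqlCommand(proc, conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@Id", id);
        cmd.Parameters.AddWithValue("@Value", value);
        cmd.ExecuteNonQuery();
    }
}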
Look into using an Object Relational Mapper (ORM) like Entity Framework for these kinds of operations. You will find that there is no general consensus on whether ORMs are a great solution for all data access needs, but it's hard to argue that they don't solve the problem of CRUD pretty comprehensively.
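For example, a single-field update through EF might look roughly like this (ShippingContext and Shipment are illustrative names); the change tracker notices which properties changed and generates an UPDATE that sets only those columns:

using (var context = new ShippingContext())
{
    var shipment = context.Shipments.Find(shipmentId);
    shipment.Status = "Dispatched";   // only this property is marked modified
    context.SaveChanges();            // UPDATE ... SET Status = ... WHERE Id = ...
}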
Connecting directly to the DB over the internet isn't something I'd switch to in a hurry.
"we decided to forgo the webservices previously used and connect directly to the DB"
What made you decide this?
If you are intent on this model, then a single SPROC to update an entire row would be advantageous over one per column. I have a similar application which uses SPROCs in this way; however, the data from the client comes in via XML, and a middleware application on our server end deals with updating the DB.
The standard practice is not to connect to the DB over the internet.
Even for a small app, this should be the overall model:
Client app -> over internet -> server-side app (WCF web service) -> LAN/localhost -> SQL DB
Benefits:
Your client app would not even know that you have switched DB implementations.
It would not know anything about DB security, etc.
You, as a programmer, would not be thinking in terms of "rows" and "columns" on the client side. Those would be objects and fields.
You would be able to use different protocols: send only single-field updates between the client app and the server app, but update entire rows between the server app and the DB (see the sketch below).
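As a sketch of that middle tier, the WCF contract might expose field-level operations like this (the service and method names are made up for illustration):

using System.ServiceModel;

[ServiceContract]
public interface IShipmentService
{
    // The client sends just the changed field over the internet; the
    // service implementation updates the whole row over the LAN.
    [OperationContract]
    void UpdateShipmentStatus(int shipmentId, string newStatus);
}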
Now, given your situation, updating the entire row (the entire object) is definitely more of a standard practice than updating a single column.
It's better to update only what you changed, if you know what changed (when using an ORM like Entity Framework, for example). But if you're going down the stored proc route, then yes, definitely update everything in a row at once; that's granular enough.
Since you're already in the middle of a big change anyway, you should take the switch as an opportunity to move over to LINQ to Entities and ditch stored procedures in the process wherever possible.

Is EF or SQL the better choice to audit data changes?

The requirement seems simple: when data changes, audit the changes.
Here are some important pieces to the equation:
The data in my application spans multiple tables (some of them cross-reference tables).
My DTO is deep, with Navigation Properties conditionally populated.
When loaded, I copy the original DTO with its "original values".
When saved is requested, the original DTO contains the changes.
Ideally, foreign keys will read like useful text, not ID numbers.
Unlike TFS' cool history feature, mine seems more complicated because of the many related tables and conditional child entities.
I see three possibilities (so far):
I could use C# to reflect the objects and create a before/after record.
I could use triggers in SQL 2008R2 to catch changes and coalesce a before/after record.
I could store the raw before/after objects and let SQL 2008R2 parse them.
Please note: right now, it seems to me that SQL 2008 R2's Change Data Capture (CDC) is far too heavy an option. I am really looking for something I can build, but I admit my mind is open to anything right now.
My question
Before I get started building this:
How does everybody else handle auditing a complex EF DTO?
Is there a low(ish)-tech solution available?
Thank you in advance.
Related, but not-completely-related, questions already on Stack Overflow that do not provide an answer:
Implementing Audit Log / Change History with MVC & Entity Framework
Create Data Audit in SQL Server
https://stackoverflow.com/questions/5773419/how-to-audit-many-to-many-relationship-in-entity-framework
Maintaining audit log for entities split across multiple tables
Linq to SQL Audit Trail / Audit Log: should I use triggers or doddleaudit?
If audit is a real requirement, I would opt for the trigger solution, since the other methods have several shortcomings:
they are "blind" to any changes happening through other means than your application
if you make some code changes and forget to add the audit code, the audit trail gets "blind spots"
The trigger-based solution can also be secured so that only special users can even see the audited data.
I usually work with Oracle, but from my experience in such situations: allow the app only SELECT rights via views; any insert/delete/update should be done via stored procedures, and the audit trail should be done via triggers.
I've recently implemented an audit log manager on top of Entity Framework. When I instantiate my audit manager, I reflect all of the entity classes and store the property information. Then, within the object context's SavingChanges event, I audit all of the changes. It works great. In the case of foreign keys, I just store their Ids before and after the change.
The nice thing about this solution is that it doesn't require any extra coding. Once you create a log manager of sort, you don't have to worry about adding new triggers, or modifying triggers when new columns are added. Any changes to your entity classes will automatically be picked up when reflecting the classes.
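A minimal sketch of that SavingChanges approach, assuming the ObjectContext API from an .edmx model (WriteAuditRecord is a placeholder for whatever persistence you use):

using System.Data;
using System.Data.Objects;

// Hook the context once; before each save, record the old and new
// values of every modified property.
static void HookAudit(ObjectContext context)
{
    context.SavingChanges += (sender, args) =>
    {
        foreach (ObjectStateEntry entry in context.ObjectStateManager
            .GetObjectStateEntries(EntityState.Modified))
        {
            foreach (string property in entry.GetModifiedProperties())
            {
                object before = entry.OriginalValues[property];
                object after = entry.CurrentValues[property];
                // WriteAuditRecord is a placeholder for your own logging:
                WriteAuditRecord(entry.EntitySet.Name, property, before, after);
            }
        }
    };
}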
Well, let's see. SQL Server auditing already exists, comes with tools, is probably already known by your DBAs, doesn't slow down your app, and can trace events that the application itself will never even see.
On the other hand, rolling your own in EF will allow you to audit non-SQL Server data sources. It also doesn't require Enterprise Edition.
Trigger Solution, Pros:
Cannot bypass the audit
Trigger Solution, Cons:
Cannot audit non-SQL data
Cannot audit complex objects on insert
Entity Framework, Pros:
Can audit everything
Can audit complex objects in any state
Entity Framework, Cons:
Can be bypassed (like direct-to-SQL)
Requires a copy of original values
My choice is Entity Framework. Using self-tracking entities (STEs) makes it easier.
Either way you have to roll your own.

Merging databases - Identity column drop

I need to create a tool that is able to merge clients production databases.
Usually these databases will have the same schema (I'll do some checks later on, but for now we'll assume they do). Filtering out duplicate data is also something for later on.
This needs to be done automatically (so no script generation via SSMS, etc.).
I've already had to start over a couple of times because I kept running into problems I hadn't thought of, so this time I wanted to ask you guys for advice before I begin all over again.
My current plan of action is:
Copy the schema from database 1 (later on I'll add some checks here for when the schemas differ).
Loop over all tables, set all foreign keys to cascade on update, and set the order in which the table data needs to be inserted (the tables containing the PKs first, then the tables holding the FKs).
Then, looping over every table in the correct order:
Check the database 2 table for an identity column; if it has one, retrieve the current seed value from the corresponding table in database 1, drop the identity property on the database 2 table, and update each ID to newID = currentID + seed (to avoid duplicate primary keys later on).
Generate an insert script (SMO's Table.EnumScript; see the sketch at the end of this question) for the database 1 table.
Generate an insert script (SMO's Table.EnumScript) for the database 2 table.
Execute every line of the database 1 insert script on the new database.
Execute every line of the database 2 insert script (which now has primary key/identity field data that follows on from database 1) on the new database.
Go to the next table.
Everything was working when testing (disabling the identity property in SSMS, creating a T-SQL script to update every row with the given seed, etc.).
But the problem now is automating this in C#, more specifically the disabling of the identity property. There doesn't seem to be a clean solution for this. Creating a new table and rebuilding every constraint etc. seems like the wrong way to go, because the only reason I need it is to cascade every FK so everything still points to the correct place.
Another way would be to delay updating the identity-column data and change it after script generation but before insertion into the new database. But then I'd need to know which data points to which other data, and how would I do that while everything is still in strings (the insert script)?
Any suggestions, thoughts or techniques on how to handle this?
I know about Red Gate's SQL Compare, and it is indeed wonderful, but I need to program it myself.
Using: SMO, SQL Server 2005 - 2008R2(no developers or enterprise edition on client servers), ADO.NET , C#, .NET framework 2.0, Visual Studio 2008
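For reference, the EnumScript step looks roughly like this (the server name, database name, and option values are illustrative, not my exact code):

using Microsoft.SqlServer.Management.Smo;

// Generate data-only INSERT scripts for every table in a source
// database using SMO's EnumScript.
var server = new Server("localhost");
var database = server.Databases["SourceDatabase1"];
var options = new ScriptingOptions
{
    ScriptData = true,    // emit INSERT statements
    ScriptSchema = false  // the schema is copied in an earlier step
};
foreach (Table table in database.Tables)
{
    foreach (string statement in table.EnumScript(options))
    {
        // execute (or buffer) each INSERT against the merged database
    }
}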
I am not sure exactly what you are trying to accomplish with your process here, but managing database versions is something that I have a keen interest in.
Have a look at DBSourceTools (http://dbsourcetools.codeplex.com).
It is a utility to script an entire database to disk, including all foreign key constraints and data.
Using deployment targets, you will then be able to re-create these databases on another database server (usually the local machine).
The tool will handle dependencies and large database tables using SQL bulk insert; trying to generate a script with 50,000 insert statements would be a nightmare.
Have fun.
Disclaimer: I am involved in the http://dbsourcetools.codeplex.com project.

How can I run SQL upon Database creation with EF 4.1 CodeFirst?

I want to utilize Elmah in my MVC application to store error messages, and I want to store the exceptions in my application's database. To do that I need to run the included DDL to create the Elmah tables and stored procs.
However, since my development database is recreated whenever my model changes (via EF CodeFirst), I need the DDL to be run any time the database is recreated.
How would I go about doing this? The only place I could think to put this would be in the overridden Seed() method of my DbInitializer, adding calls to run the SQL there, but that doesn't seem completely appropriate, since I am not seeding Elmah; I am creating the schema itself.
What is the most appropriate way to apply the DDL upon database recreation?
Using the Seed method is the usual approach for running custom SQL after the database is created. Its main purpose is to fill in some initial data, but developers also use it for creating indexes, constraints, etc., so you can put anything you need there.
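A minimal sketch, assuming the Elmah DDL ships with the app as a script file (the file name and context type are placeholders; note that ExecuteSqlCommand runs a single batch, so a script containing GO separators has to be split first):

using System;
using System.Data.Entity;
using System.IO;

public class MyDbInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Run the Elmah DDL every time CodeFirst recreates the database,
        // splitting the script on its GO batch separators.
        string ddl = File.ReadAllText("Elmah.SqlServer.sql");
        foreach (string batch in ddl.Split(
            new[] { "\r\nGO", "\nGO" }, StringSplitOptions.RemoveEmptyEntries))
        {
            context.Database.ExecuteSqlCommand(batch);
        }
    }
}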
