I have a requirement that feels like it probably has a simpler solution with EF than what we're currently using.
Essentially, as an auditing requirement, for any entity that inherits from a given base class, I need to create not only the entity's table itself, but also a second table that's identical except for 3 additional columns: a FK back to the original entity's table, a description (e.g. "Modified", "Added", "Deleted"), and an XML column that will contain a serialized version of the state of the entity.
At present, we're manually adding the entities to create the audit tables (they currently inherit from an AuditableEntity class, and developers have to manually ensure that the other fields match the original entity) and using migrations to add T-SQL triggers to the entity tables to update the data in the audit tables on any insert, update or delete.
I'd prefer if I could somehow get EF to automatically create/migrate the audit tables based on the entity tables without having to manually sync them, and likewise use an interceptor or something similar to update the audit table on insert/update/delete of an entity rather than using triggers. Does anyone know if this is possible, or has anyone done something similar? In the past, the closest I've come is a single, common audit history table, which wasn't too bad.
Disclaimer: I'm the owner of the project Entity Framework Plus
This project may answer your requirement. You can access all auditing information such as entity name, action name, property name, original and current values, etc.
A lot of options are available, such as automatically saving all of this information in the database (AutoSave).
// using Z.EntityFramework.Plus; // Don't forget to include this.
var ctx = new EntityContext();
// ... ctx changes ...
var audit = new Audit();
audit.CreatedBy = "ZZZ Projects"; // Optional
ctx.SaveChanges(audit);
// Access to all auditing information
var entries = audit.Entries;
foreach (var entry in entries)
{
    foreach (var property in entry.Properties)
    {
        // Inspect the audited values here (property name, old/new value, etc.)
    }
}
Documentation: EF+ Audit
You could create one table with the columns:
Id
TableName
Action (Add, update, delete)
IdOfRecord
XmlSerialized
DateChanges (use datetime2)
Then override SaveChanges() to write each change to that one table.
No need to mess around with keeping the audit table schema up to date when running migrations, etc.
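A minimal sketch of such an override, assuming an AuditEntry entity (and a corresponding AuditEntries DbSet) mapped to the table above; SerializeToXml is a placeholder for your own serialization helper, not an EF API:
public override int SaveChanges()
{
    // Capture the pending changes before the base call resets the entity states
    var auditRecords = ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added ||
                    e.State == EntityState.Modified ||
                    e.State == EntityState.Deleted)
        .Select(e => new AuditEntry
        {
            TableName = e.Entity.GetType().Name,
            Action = e.State.ToString(),              // "Added", "Modified", "Deleted"
            XmlSerialized = SerializeToXml(e.Entity), // placeholder serialization helper
            DateChanges = DateTime.UtcNow
            // IdOfRecord: for added entities the key only exists after
            // base.SaveChanges(), so it may have to be filled in afterwards.
        })
        .ToList();

    AuditEntries.AddRange(auditRecords);
    return base.SaveChanges();
}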
Related
I am developing an application using EF6 with the Fluent API and I have an issue managing a many-to-many relationship.
For some internal reasons the Join table has a specific format including 4 fields
- Left Id (FK)
- Right Id (FK)
- StartDate (dateTime)
- EndDate (datetime)
Deleting a link actually means setting the EndDate to a non-null value, but I don't know how to configure this in EF6.
On the other hand, when reading links, records with a non-NULL EndDate shouldn't be considered.
Can you give me a solution?
Thank you.
Join tables and EF
EF automates some things for you. For this, it uses convention-over-configuration. If you stick to the convention, you can skip on a whole lot of common configuration.
For example, if your entity has a property named Id, EF will inherently assume that this is the PK.
Similarly, if two entity types have nav props that refer to each other (and only one direct link between the two entities exists), then EF will automatically assume that these nav props are the two sides to a single many-to-many relationship. EF will make a join table in the database, but it will keep this hidden from you, and let you deal with the two entity types themselves.
For some internal reasons the Join table has a specific format including 4 fields - Left Id (FK) - Right Id (FK) - StartDate (dateTime) - EndDate (datetime)
Your join table no longer conforms to what a conventional, automatically generated EF join table contains. You are expecting a level of custom configurability that EF cannot provide based on blind convention, which means you have to configure it explicitly.
Secondly, the fact that you have these additional columns implies that you wish to use this data at some point (presumably to show the historical relations between two entities). Therefore, it doesn't make sense to rely on EF's automatic join tables, as the join table and its content would be hidden from the application/developer.
It's possible that the second consideration is invalid for you, if you don't need the application to ever fetch the ended entries. But the overall point still stands.
The solution here is to make the join record an explicit entity of its own. In essence, you are not dealing with a many-to-many here, you are dealing with a specific entity (the join element) with two one-to-many relationships (one for each of the two entity types).
This enables you to achieve exactly what you want. Your expectation of what EF can automate for you simply doesn't apply in this case.
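As an illustration, a rough sketch of that explicit join entity and its EF6 Fluent API configuration (LeftEntity, RightEntity and the Joins collections are placeholder names standing in for your two real entity types):
public class JoinEntity
{
    public int Id { get; set; }

    public int LeftId { get; set; }
    public virtual LeftEntity Left { get; set; }

    public int RightId { get; set; }
    public virtual RightEntity Right { get; set; }

    public DateTime StartDate { get; set; }
    public DateTime? EndDate { get; set; }   // null while the link is still active
}

// In your DbContext:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Two one-to-many relationships instead of one hidden many-to-many
    modelBuilder.Entity<JoinEntity>()
        .HasRequired(j => j.Left)
        .WithMany(l => l.Joins)
        .HasForeignKey(j => j.LeftId);

    modelBuilder.Entity<JoinEntity>()
        .HasRequired(j => j.Right)
        .WithMany(r => r.Joins)
        .HasForeignKey(j => j.RightId);
}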
Soft delete
Deleting a link actually means setting the EndDate to a non-null value, but I don't know how to configure this in EF6.
In general, this is known as "soft delete" behavior, albeit maybe slightly differently here. In a regular soft delete pattern, when an entry is deleted, the database secretly retains the entry but the application doesn't know that and doesn't see the entry again.
It's unclear if you intend for ended entries to still show up in the application, e.g. the relational history. If this is not the case, then your situation is exactly soft delete behavior.
This isn't something you configure at the model level, but rather something you override in your DbContext's SaveChanges behavior. A simple example of how I implement a soft delete:
public override int SaveChanges()
{
    // Get all entries of the change tracker (of a given type)
    var entries = ChangeTracker.Entries<IAuditedEntity>().ToList();

    // Filter the entries that are being deleted
    foreach (var entry in entries.Where(e => e.State == EntityState.Deleted))
    {
        // Change the entry so it updates the entity instead of deleting it
        entry.Entity.DeletedOn = DateTime.Now;
        entry.State = EntityState.Modified;
    }

    return base.SaveChanges();
}
This allows you to prevent deletions of the entities that you want this to apply to, which is the safest way to implement a soft delete, as it serves as a catch-all for database deletes coming from whichever consumer uses this db context.
The solution to your question is pretty much the same. Assuming you named your join entity (see previous chapter) JoinEntity:
public override int SaveChanges()
{
    var entries = ChangeTracker.Entries<JoinEntity>().ToList();

    // Filter the entries that are being deleted
    foreach (var entry in entries.Where(e => e.State == EntityState.Deleted))
    {
        // Change the entry so it updates the entity instead of deleting it
        entry.Entity.EndDate = DateTime.Now;
        entry.State = EntityState.Modified;
    }

    return base.SaveChanges();
}
Word of warning
Soft deletes tend to be a catch-all for all entities (or at least a significant chunk of your database). Therefore, it makes sense to catch this at the db context level as I did here.
However, if this entity is unique in that it is soft deleted, then this is more of a business logic implementation than it is DAL architecture. If you start writing many custom rules for different types of entities, the db context logic is going to get cluttered and it's not going to be nice to work with, because you need to account for multiple possible operations happening during SaveChanges.
Take care not to push what is supposed to be a business logic decision into the DAL. I can't draw this line for you; it depends on your context. But evaluate whether the db context is the best place to implement this behavior.
Can you give me a solution?
If your linking table has extra columns, you have to model it as an entity, and the EndDate logic for navigation needs to be explicit. EF won't do any of that for you.
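For the reading side, a hedged sketch of what excluding ended links could look like (JoinEntities, Right and someLeftId are placeholder names, assuming the explicit join entity described earlier):
// Only traverse links that have not been ended
var activeRights = ctx.JoinEntities
    .Where(j => j.LeftId == someLeftId && j.EndDate == null)
    .Select(j => j.Right)
    .ToList();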
I use Entity Framework code first. There are two classes, "Question" and "User". I defined a relationship as below:
this.HasRequired(v => v.Creator).WithMany(v => v.Questiones)
.HasForeignKey(v => v.CreatorId).WillCascadeOnDelete(false);
After generating the database, I found that it always creates a foreign key between the Id of User and the CreatorId of Question. Because of the lower performance of the FK (and other reasons), I want to define the navigation property relationship without setting a foreign key in the database. Should I delete the FK after EF creates it?
If this cannot be done using the fluent API, could you tell me why EF is designed this way?
About the lower performance of the FK: I have a User table with 5 million records in it. When I insert a Question into the db, the db checks the validity of question.CreatorId against the User table, so it is always slower than without the FK.
And there are many other reasons why I need to remove the FK.
I think I'm being somewhat obsessive, because deleting the FK after it is created feels strange and ugly. What I want is to implement this using something like WithoutForeignKey in the fluent API:
this.HasRequired(v => v.Creator).WithMany(v => v.Questiones)
.WithoutForeignKey(v => v.CreatorId).WillCascadeOnDelete(false);
Without questioning why you are trying to do this strange thing, and going straight to the answer: you could delete the FK constraint after it is generated, or you could use migrations and remove the FK generation from the migration code.
The SQL generated when traversing nav properties will work even if the FK constraint doesn't exist, except for cascade deletes.
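If you go the migrations route, you can either delete the generated AddForeignKey/CreateIndex calls from the scaffolded migration, or drop the constraint in a later migration. A sketch of the latter in EF6 (the migration class, table and column names here are examples, not something EF generated for you):
// using System.Data.Entity.Migrations;
public partial class DropQuestionCreatorForeignKey : DbMigration
{
    public override void Up()
    {
        // Drop the constraint (and its index) that the initial migration created;
        // the CreatorId column and the navigation property keep working,
        // minus cascade delete and the db-side integrity check.
        DropForeignKey("dbo.Questions", "CreatorId", "dbo.Users");
        DropIndex("dbo.Questions", new[] { "CreatorId" });
    }

    public override void Down()
    {
        CreateIndex("dbo.Questions", "CreatorId");
        AddForeignKey("dbo.Questions", "CreatorId", "dbo.Users", "Id");
    }
}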
If you want a relationship between two tables, you need to define a foreign key. There is no way around it. Even if you use Map() in the fluent API, you can only hide the foreign key in your model; in the background EF will still use it and it will exist in the database.
Also, I don't get what you mean by the "performance" of a foreign key - one extra (likely small) column won't make a difference. If you mean the navigation properties for the performance part, you can do 3 things:
Don't include them in your model
Make them non-virtual to disable lazy loading
Disable lazy loading all together with ctx.Configuration.LazyLoadingEnabled = false;
If you don't want to tell the db about the relation and want to treat both entities as unrelated (I wonder why), then just ignore these navigation properties and the FK field. Note that you will be responsible for managing related entities: saving and loading them from the db, updating ids, etc.
this.Ignore(q => q.Creator);
this.Ignore(q => q.CreatorId);
And you also need to ignore the other side of the relation, otherwise EF will generate an FK column with the default name Creator_CreatorId. So in the Creator entity configuration:
this.Ignore(c => c.Questiones);
I'm using NHibernate and looking for a solution that will allow me to audit changes to all fields in an entity. I want to be able to create a history table for every entity, i.e. Users -> UsersHistory, that will have the same structure as the Users table plus additional fields such as operation type (update, delete), the userid of the user that made the change, etc. I don't want to define such a class for every entity. I'm looking for something like History<T> (i.e. History<User>), because these entries don't belong to my domain and will only be used to prepare a list of changes made to the entity. I also think it would be better to create inserts into these tables in code rather than creating SQL triggers. Basically, I just need to create a copy of the record in the history table on update or delete, and I want the insert to be generated by NHibernate. I will also need to read records from the history tables - as I said, these tables will consist of the entity fields and some common history fields.
I cannot find guidance on how to create such a solution. All I can find is adding UserModified, UpdatedTimestamp, etc. if I already have such fields on the entity. However, I need the full history of the entity, not just the information about who last changed the entry.
Thanks in advance for help.
There is a cool, open source audit trail for NHibernate called nhibernate.envers (https://bitbucket.org/RogerKratz/nhibernate.envers), so you do not have to reinvent the wheel.
It integrates transparently into NHibernate, no changes to your domain model or mappings.
It's as simple as adding the reference and calling:
var enversConf = new FluentConfiguration();
enversConf.Audit<User>();
nhConf.IntegrateWithEnvers(enversConf);
where nhConf is your NHibernate config object.
For every change on your object a new revision is created. You can ask Envers to retrieve a revision by calling:
var reader = AuditReaderFactory.Get(session);
var userInRevOne = reader.Find<User>(user.Id, 1);
or list all revisions, etc. The revision data itself can be enriched with a username, user id, timestamp, etc. (whatever you can think of).
EDIT:
And it is available at NuGet: http://nuget.org/packages/NHibernate.Envers
I think the best solution is using Event Listeners:
http://darrell.mozingo.net/2009/08/31/auditing-with-nhibernate-listeners/
I wrote something similar to the above (modified after finding that blog), except I store the result as XML.
e.g.:
public void OnPostUpdate(PostUpdateEvent updateEvent)
{
    if (updateEvent.Entity is AuditItem)
        return;

    var dirtyFieldIndexes = updateEvent.Persister.FindDirty(updateEvent.State, updateEvent.OldState, updateEvent.Entity, updateEvent.Session);

    var data = new XElement("AuditedData");

    foreach (var dirtyFieldIndex in dirtyFieldIndexes)
    {
        var oldValue = GetStringValueFromStateArray(updateEvent.OldState, dirtyFieldIndex);
        var newValue = GetStringValueFromStateArray(updateEvent.State, dirtyFieldIndex);

        if (oldValue == newValue)
        {
            continue;
        }

        data.Add(new XElement("Item",
            new XAttribute("Property", updateEvent.Persister.PropertyNames[dirtyFieldIndex]),
            new XElement("OldValue", oldValue),
            new XElement("NewValue", newValue)
        ));
    }

    AuditService.Record(data, updateEvent.Entity, AuditType.Update);
}
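GetStringValueFromStateArray isn't shown above; a minimal guess at what such a helper might do (this is an assumption, not the blog author's actual implementation):
private static string GetStringValueFromStateArray(object[] state, int index)
{
    // The event's State/OldState arrays hold raw property values in persister order;
    // normalize null so the old/new comparison above stays simple.
    var value = state[index];
    return value == null ? string.Empty : value.ToString();
}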
The audit service just adds some additional data such as the IP address, the user (if any), whether it was a system/service update or actioned via a website by a user, etc.
Then in my DB I store the XML like:
<AuditedData>
  <Item Property="Awesomeness">
    <OldValue>above average</OldValue>
    <NewValue>godly</NewValue>
  </Item>
  <Item Property="Name">
    <OldValue>Phill</OldValue>
    <NewValue>Phillip</NewValue>
  </Item>
</AuditedData>
I also have Insert/Delete listeners.
What you are looking for are Event Listeners (unfortunately I cannot link to the relevant docs because the nhforge.org wiki is experiencing an NRE...).
Check Complex NHibernate Auditing
Here's a complete example of how to do this: http://www.shawnduggan.com/?p=89.
Also covered in this post: Audit logging nhibernate
I have 2 entities:
In my DB they look like:
Vehicles(Id, VehicleNumber, IsDeleted, WorkerId)
Workers(Id, Name, Address)
And in my edmx:
Vehicles: Id, VehicleNumber, IsDeleted, WorkerId, Worker
Workers: Id, Name, Address, VehiclesList
As you can see, the Vehicles table contains soft deleted rows. Now when I get the Worker with id=2, I get all his vehicles, including the one I soft deleted. How can I retrieve only the undeleted vehicles?
Badly. EF has very limited support for soft deletes. Actually, the only possibility is conditional mapping, where you hardcode a condition into your mapping (it cannot be changed at runtime) saying that you only want to load entities having IsDeleted = 0. Check mapping details:
But it has very bad consequences:
The IsDeleted column cannot be mapped - it is already used internally to define the mapping condition
Your model can never be used to load soft deleted entities, even if you want to
The first problem can be solved by mapping a stored procedure to the delete operation of the Vehicle entity, and the second problem can be solved by a separate model for auditing and retrieving deleted entities.
Also, conditional mapping is not supported by code first - it requires an EDMX file.
Is lazy loading enabled? Then try to limit the result set with a Where:
worker.VehiclesList.Where(x=>!x.IsDeleted)
Also, you can put a condition on the Vehicles table mapping in the model designer: IsDeleted = false. Soft deleted vehicles will not be retrieved at all.
I need to recreate a database with exactly the same values it was originally created with, so I need to add records with pre-defined PK values. In this case, the PK is Identity in the database, and when I try to set its value, it is simply ignored and the value comes from the identity instead. No error is raised, but the PK value that I supply is ignored.
example:
Category category = new Category()
{
CategoryID=1,
CategoryName="Beverages",
Description="Soft drinks, coffees, teas, beers, and ales"
};
ctx.Categories.Add(category);
ctx.SaveChanges();
Notes:
I'm using POCO, code first, so I don't have an EDMX model to configure.
I don't want to use ctx.Database.ExecuteSqlCommand(). I wish to maintain a database-agnostic approach.
In this case, the PK is Identity
In such a case you should never manually insert its value. Once you set a column as identity, the DB should be responsible for controlling the Id. Because of that, there is no way to pass the value from EF (unless you want to break other functionality). You must use ExecuteSqlCommand and create more complex SQL which will:
Turn on identity insert for the table
Insert record
Turn off identity insert for the table
Inserting a value into an identity column must be allowed with SET IDENTITY_INSERT tableName ON.
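A minimal sketch of what that could look like from EF, sent as a single batch so IDENTITY_INSERT applies to the insert on the same connection (the table and column names follow the Categories example above; this is SQL Server specific, so it does give up database agnosticism):
ctx.Database.ExecuteSqlCommand(
    @"SET IDENTITY_INSERT dbo.Categories ON;
      INSERT INTO dbo.Categories (CategoryID, CategoryName, Description)
      VALUES (@p0, @p1, @p2);
      SET IDENTITY_INSERT dbo.Categories OFF;",
    1, "Beverages", "Soft drinks, coffees, teas, beers, and ales");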
I don't know if your scenario will let you do this, but if you define a composite key as follows:
modelBuilder.Entity<Category>().HasKey(s => new { s.CategoryID, s.CategoryName });
(using HasKey while overriding the DbContext.OnModelCreating method, EF 4.1 Code First), then you actually can control which values get inserted when you save the POCO object to the database.
I will say, however, that I agree with Ladislav insofar as the primary key values you are trying to maintain here are conceptually more like data than record identifiers, and should be treated as such. Meaning: treat them as plain data fields, and create a new primary key field on your POCO class to uniquely identify database records, e.g. for Category:
public Int32 PK {get; set;}
and be sure to indicate it's intended to be the PK field in OnModelCreating:
modelBuilder.Entity<Category>().HasKey(c => c.PK);
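Putting that together, a rough sketch of how the reshaped POCO might look (the PK name is just an example; CategoryID stays as an ordinary data column that you can set freely):
public class Category
{
    // Surrogate identifier controlled by the database (identity), mapped as the key above
    public Int32 PK { get; set; }

    // The original id, now kept purely as data and free to take any supplied value
    public int CategoryID { get; set; }

    public string CategoryName { get; set; }
    public string Description { get; set; }
}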