Suppose you have this table structure:
Patient -> PatientTag -> Tag
A typical N:M relationship between patients and tags, with PatientTag as the intermediate entity holding both FKs (PatientId and TagId).
I want to remove a specific tag, and I have its ID. I'm doing the following, but I'd like to know if there's a better way. Since these are the first methods I've written using PLINQO, I don't want to establish bad practices from the start.
using (MyDataContext dc = DataContextFactory.GetDataContext())
{
    var options = new DataLoadOptions();
    options.LoadWith<Paciente>(p => p.PacienteTagList);
    options.LoadWith<PacienteTag>(pt => pt.Tag);
    dc.LoadOptions = options;

    // Get the Tag we're going to remove from the DB.
    var tag = dc.Manager.Tag.GetByKey(idTag);

    // Remove each patient from the association.
    foreach (Paciente pac in tag.PacienteList1)
    {
        // We need to retrieve it; it won't let us use the 'pac' object directly.
        var pax = dc.Manager.Paciente.GetByKey(pac.IdPaciente);
        pax.TagList.Remove(tag);
    }

    // Now remove the tag.
    dc.Manager.Tag.Delete(tag.TagId);

    // And commit the changes.
    dc.SubmitChanges();
}
Thanks for any insight on the subject.
I agree with tvanfosson: do it in the database. Another way (maybe safer, imho) would be to create a stored procedure and call it from your code. Make sure it's all wrapped up in a transaction that can handle rollbacks in case something goes wrong.
What about simply using a foreign key with cascading delete, then deleting the tag itself and letting the database take care of deleting all the references? If you want to make sure the tag isn't in use, you could check that there aren't any patients associated with it first, though you may need to wrap it in a transaction if there are other processes accessing the same database.
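With ON DELETE CASCADE on PatientTag's TagId foreign key, the calling code from the question collapses to a single delete. A minimal sketch, assuming the same PLINQO context as the question (the PacienteList1 check is the optional in-use guard):

using (MyDataContext dc = DataContextFactory.GetDataContext())
{
    var tag = dc.Manager.Tag.GetByKey(idTag);

    // Optional guard: only delete the tag if no patient is using it.
    if (tag != null && !tag.PacienteList1.Any())
    {
        // The database cascades the delete to the PatienteTag rows.
        dc.Manager.Tag.Delete(tag.TagId);
        dc.SubmitChanges();
    }
}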
I am trying to perform a restore of data using NHibernate but I am getting all sorts of errors, from foreign key violations to primary key violations and everything in between.
To give some background, I created a "Base" class from which every class in my application inherits (please don't comment on this; it's what I need/want).
So to perform a backup, I simply call session.QueryOver<BaseClass>().List<BaseClass>() and I get all the data; I serialize it to JSON, zip it, and save it. That's how I create backups.
Now the restore....
I deserialize the backup with ease, get the right types and everything.
I've tried using session.Save(item, item.Id) to put the items back with the same IDs as in the original database, but NHibernate doesn't seem to like this, especially when I have foreign keys between tables (or classes).
Browsing the internet, it seems my answers would lie with stateless sessions. I tried these, but I still get all sorts of errors.
One thing I tried was to wrap all the inserts in a try-catch and retry until I no longer get errors. This sort of worked, but when I call session.Commit I get an error message with a lot of 'Violation of PRIMARY KEY constraint' messages. I have wrapped all of my inserts into one transaction (while writing this, I am thinking of trying without the transaction). Without the transaction it seems to have saved some of the data, but I think I should have a transaction, as I want to guarantee that all or none of the data is restored, to make restores more reliable.
Using try-catch doesn't seem reliable, and it also means I have to guess how many times to retry the insert action on failed items.
One important note: when my code is running, it knows nothing about the classes or types other than that they are of type BaseClass, with an Id field. One class that is giving an error is a Menu class. It has a childMenus property of type List<Menu> and a parentMenu property of type Menu. These two properties are mapped using Fluent NHibernate as HasMany and References, which is how I believe they should be mapped. This is the sort of class that is causing problems for me, because NHibernate has created foreign keys. That is good in my opinion, except that now I can't do a restore easily.
If I don't get a suitable answer or figure this out soon, my solution will be to order the items to be restored so that any item which "looks" like it might have a parent object (a property of type BaseClass, implying a foreign key) is sorted toward the end of the list and inserted last, hopefully avoiding foreign key constraint violations.
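A rough sketch of that ordering (the reflection heuristic here is hypothetical, just to illustrate the idea):

static List<BaseClass> OrderForRestore(IEnumerable<BaseClass> items)
{
    // Items whose types expose no BaseClass-typed properties (no parent
    // references) come first; items that reference other entities go last.
    return items
        .OrderBy(item => item.GetType()
            .GetProperties()
            .Count(p => typeof(BaseClass).IsAssignableFrom(p.PropertyType)))
        .ToList();
}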
But I am hoping there are other alternatives.
Also, when I do the restore, the Id generator is set to assigned, so I don't think my problem has to do with unknown or invalid IDs. In the original data my IDs are GUIDs. (I may change this to hilo integers later on, but one problem at a time.)
Any help will be much appreciated.
Thanks in advance...
Unless I figure out a better alternative, my solution will involve brute-forcing the data into the database, using code similar to the following:
var existingCount = 0L;
var lastCount = -1L;
while (existingCount < items.Count)
{
    using (var session = factory.OpenSession())
    {
        existingCount = session.CreateCriteria<BaseClass>()
            .SetProjection(Projections.RowCountInt64())
            .List<long>()
            .Sum();
        session.Flush();
    }
    if (existingCount == items.Count)
    {
        break; // Success: everything has been inserted.
    }
    if (lastCount == existingCount)
    {
        throw new Exception("Error restoring backup, no change after retrying inserting new items.");
    }
    lastCount = existingCount;
    try
    {
        using (var session = factory.OpenSession())
        {
            var existingItems = session.QueryOver<BaseClass>().List<BaseClass>().ToList();
            // Checks whether each item already exists; if not, tries to save it.
            // Also has some try-catch processing of its own.
            SaveItemsToDb(existingItems, items, session);
            session.Flush();
        }
    }
    catch (Exception)
    {
        // Do nothing, just try again on the next pass.
    }
}
I've shown this problem to three programmers and we're all stumped. I call a SQL Server stored procedure in a foreach loop, and the result is always the same as the first call. Even if I hard-code the parameters (removing the loop), the first result is assigned to all subsequent calls.
The stored procedure is called through an Entity Framework function import (EF4, database-first, using the designer). The calling code lives in a repository in a class library. The repository is called by a separate ASP.NET WebForms project. The problem code looks like this:
IEnumerable<WorkOrder> orders = _context.GetWorkOrders(UserName, workOrder, customerCode).ToList();
OrderStatus lastStatus = new OrderStatus();
foreach (WorkOrder order in orders)
{
    lastStatus = _context.GetOrderStatus(order.OrderNumber).FirstOrDefault();
    order.LastOrderStatus = lastStatus.OrderStatus;
}
As you can see, this is pretty basic stuff. Whatever order numbers are passed in, I always get the result for the first order number in the loop. I've turned off Ajax (part of the Telerik controls I use) because it has caused baffling errors for me in the past. I really hope you can suggest a way to debug this problem! Thanks in advance.
EDIT: Daniel J.G.'s comment led me to this possible solution. Now I need to figure out how to apply Ladislav Mrnka's answer..."Try to call ExecuteFunction directly with MergeOption.OverwriteChanges."
I'm answering my own question (since no one else has after a few days). The problem is caused by the Entity Framework database-first designer. It generates code that caches the first stored procedure result, causing the bad results in subsequent calls.
As I mentioned in the edit to my question, the fix involves replacing the default MergeOption parameter used by ExecuteFunction. You need to use MergeOption.OverwriteChanges instead of the default (MergeOption.AppendOnly, which returns the already-tracked entity rather than refreshing it from the database).
You could change that parameter in the generated code, but your changes would be lost each time the designer regenerates it. Instead, I simply copied the generated code into my repository class, changed the MergeOption to OverwriteChanges, and stopped using the generated code. The end result looks like this:
IEnumerable<WorkOrder> orders = _context.GetWorkOrders(UserName, workOrder, customerCode).ToList();
OrderStatus lastStatus = new OrderStatus();
foreach (WorkOrder order in orders)
{
    ObjectParameter workOrderParameter;
    if (order.WorkOrder != null)
    {
        workOrderParameter = new ObjectParameter("WorkOrder", order.WorkOrder);
    }
    else
    {
        workOrderParameter = new ObjectParameter("WorkOrder", typeof(global::System.String));
    }
    lastStatus = _context.ExecuteFunction<OrderStatus>("GetOrderStatus", MergeOption.OverwriteChanges, workOrderParameter).FirstOrDefault();
    if (lastStatus != null)
    {
        order.LastOrderStatus = lastStatus.OrderStatus;
    }
}
I also see that there is a way to modify the T4 template so that the generated code uses the correct MergeOption parameter. I haven't tried it, though. If you're interested, take a look here.
I'm back with a second answer to my own question: be sure the Entity Key is truly a unique identifier for each entity!
In my case, the OrderStock entity was missing OrderID (alongside StockID) in its Entity Key. Typically the designer picks up the primary key fields from the database, but I have an unusual situation (my entity is based on a view). Since I left OrderID out of the Entity Key, I saw duplicate rows for a single OrderStock entity.
When I marked OrderID with Entity Key = True, the duplicate problem went away.
As I've mentioned in a couple other questions, I'm currently trying to replace a home-grown ORM with the Entity Framework, now that our database can support it.
Currently, we have certain objects set up such that they are mapped to a table in our internal database and a table in the database that runs our website (which is not even in the same state, let alone on the same server). So, for example:
Part p = new Part(12345);
p.Name = "Renamed part";
p.Update();
will update both the internal and the web databases simultaneously to reflect that the part with ID 12345 is now named "Renamed part". This logic only needs to go one direction (internal -> web) for the time being. We access the web database through a LINQ-to-SQL DBML and its objects.
I think my question has two parts, although it's possible I'm not asking the right question in the first place.
Is there any kind of "OnUpdate()" event/method that I can use to trigger validation of "Should this be pushed to the web?" and then do the pushing? If there isn't anything by default, is there any other way I can insert logic between .SaveChanges() and when it hits the database?
Is there any way that I can specify, for each object, which DBML object it maps to, and for each EF auto-generated property, which property on the L2S object it maps to? The names often match up, but not always, so I can't rely on that. Alternatively, can I modify the L2S objects in a generic way so that they can populate themselves from the EF object?
Sounds like a job for SQL Server replication.
You don't need to interconnect the two, as you seem to be suggesting in question 2.
Just have two separate databases with their own EF or L2S models, and abstract them away behind repositories that work with domain objects.
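A minimal sketch of that shape (all names here are hypothetical, not from the question): the repository exposes one domain-level operation and hides the fact that two contexts are updated behind it.

public class PartRepository
{
    private readonly InternalEntities _internal = new InternalEntities(); // EF model
    private readonly WebDataContext _web = new WebDataContext();          // L2S model

    public void RenamePart(int partId, string newName)
    {
        var internalPart = _internal.Parts.Single(p => p.Id == partId);
        internalPart.Name = newName;
        _internal.SaveChanges();    // EF commit

        var webPart = _web.Parts.Single(p => p.Id == partId);
        webPart.Name = newName;
        _web.SubmitChanges();       // LINQ-to-SQL commit
    }
}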
This is the solution I ended up going with. Note that the implementation of IAdvantageWebTable is inherited from the existing base class, so nothing special needed to be done for EF-based classes, once the T4 template was modified to inherit correctly.
public partial class EntityContext
{
    public override int SaveChanges(System.Data.Objects.SaveOptions options)
    {
        // Get the list of things to update.
        var modified = this.ObjectStateManager.GetObjectStateEntries(EntityState.Modified | EntityState.Added);

        // Call the base SaveChanges, which clears that list.
        var result = base.SaveChanges(options);

        using (var context = new WebDataContext()) // This is the second database context.
        {
            foreach (var obj in modified)
            {
                var table = obj.Entity as IAdvantageWebTable;
                if (table != null)
                {
                    // IAdvantageWebTable.UpdateWeb() calls all the existing
                    // logic I've had in place for years.
                    table.UpdateWeb(context);
                }
            }
            context.SubmitChanges();
        }
        return result;
    }
}
I'm using NHibernate and looking for a solution that will allow me to audit changes to all fields in an entity. I want to be able to create a history table for every entity, i.e. Users -> UsersHistory, with the same structure as the Users table plus additional fields such as the operation type (update, delete), the user ID of the user who made the change, etc. I don't want to define such a class for every entity; I'm looking for something like History<T> (i.e. History<User>), because these entries don't belong to my domain and will only be used to prepare a list of changes made to an entity.
I also think it would be better to create the inserts into these tables in code rather than with SQL triggers. Basically, I just need to create a copy of the record in the history table on update or delete, and I want the insert to be generated by NHibernate. I will also need to read records from the history tables; as I said, they will consist of the entity fields plus some common history fields.
I cannot find guidance on how to create such a solution. All I can find is adding UserModified, UpdatedTimestamp, etc., if I already have such fields on the entity. However, I need the full history of the entity, not just who last changed it.
Thanks in advance for help.
There is a cool, open-source audit trail for NHibernate called nhibernate.envers (https://bitbucket.org/RogerKratz/nhibernate.envers), so you do not have to reinvent the wheel.
It integrates transparently into NHibernate, no changes to your domain model or mappings.
It's as simple as adding the reference and calling:
var enversConf = new FluentConfiguration();
enversConf.Audit<User>();
nhConf.IntegrateWithEnvers(enversConf);
where nhConf is your NHibernate config object.
For every change to your objects a new revision is created, and you can ask Envers to retrieve a revision by calling:
var reader = AuditReaderFactory.Get(session);
var userInRevOne = reader.Find<User>(user.Id, 1);
or list all revisions, etc. The revision data itself can be enriched with a username, user ID, timestamp, etc. (whatever you can think of).
EDIT:
And it is available at NuGet: http://nuget.org/packages/NHibernate.Envers
I think the best solution is using Event Listeners:
http://darrell.mozingo.net/2009/08/31/auditing-with-nhibernate-listeners/
I wrote something similar to the above (modified after finding that blog), except I store the result as XML. For example:
public void OnPostUpdate(PostUpdateEvent updateEvent)
{
    // Don't audit the audit records themselves.
    if (updateEvent.Entity is AuditItem)
        return;

    var dirtyFieldIndexes = updateEvent.Persister.FindDirty(updateEvent.State, updateEvent.OldState, updateEvent.Entity, updateEvent.Session);

    var data = new XElement("AuditedData");
    foreach (var dirtyFieldIndex in dirtyFieldIndexes)
    {
        var oldValue = GetStringValueFromStateArray(updateEvent.OldState, dirtyFieldIndex);
        var newValue = GetStringValueFromStateArray(updateEvent.State, dirtyFieldIndex);
        if (oldValue == newValue)
        {
            continue;
        }
        data.Add(new XElement("Item",
            new XAttribute("Property", updateEvent.Persister.PropertyNames[dirtyFieldIndex]),
            new XElement("OldValue", oldValue),
            new XElement("NewValue", newValue)
        ));
    }
    AuditService.Record(data, updateEvent.Entity, AuditType.Update);
}
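GetStringValueFromStateArray isn't shown in that snippet; a plausible minimal version (my assumption, not the original helper) would be:

private static string GetStringValueFromStateArray(object[] state, int index)
{
    // Null-safe conversion so the old/new comparison above never throws.
    var value = state[index];
    return value == null ? string.Empty : value.ToString();
}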
The audit service just adds some additional data, such as the IP address, the user (if any), and whether it was a system/service update or actioned via the website by a user.
Then in my DB I store XML like this:
<AuditedData>
  <Item Property="Awesomeness">
    <OldValue>above average</OldValue>
    <NewValue>godly</NewValue>
  </Item>
  <Item Property="Name">
    <OldValue>Phill</OldValue>
    <NewValue>Phillip</NewValue>
  </Item>
</AuditedData>
I also have Insert/Delete listeners.
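For completeness, here is a minimal sketch of wiring such a listener into the NHibernate configuration (AuditEventListener is a hypothetical class implementing IPostUpdateEventListener):

var cfg = new NHibernate.Cfg.Configuration();
cfg.EventListeners.PostUpdateEventListeners =
    new IPostUpdateEventListener[] { new AuditEventListener() };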
What you are looking for are Event Listeners (unfortunately I cannot link to the relevant docs because the nhforge.org wiki is experiencing an NRE...).
Check Complex NHibernate Auditing
Here's a complete example of how to do this: http://www.shawnduggan.com/?p=89.
Also covered in this post: Audit logging nhibernate
I have the following situation:
Customers contain projects and projects contain licenses.
Because of archiving we won't actually delete anything; we use an IsDeleted flag instead.
Otherwise I could have used cascading deletes.
Okay, I work with the repository pattern, so I call
customerRepository.Delete(customer);
But here the problem starts. The customer is set to IsDeleted = true, but then I would like to "delete" all the projects of that customer the same way, and each project that gets deleted should "delete" all of its licenses as well.
I would like to know if there is a proper solution for this.
It has to be performant, though.
Note that this is a simplified version of the actual problem: a customer also has sites, which are also linked to licenses, but I wanted to keep the problem simple for you guys.
I'm working in a C# environment using SQL Server 2008 as the database.
Edit: I'm using Enterprise Library to connect to the database.
One option would be to do this in the database with triggers. I guess another option would be to use cascading updates, but that might not fit in with how your domain works.
Personally, I'd probably just bite the bullet and write the C# code to set the IsDeleted-type fields for me (if there was one and only one app accessing the DB).
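A minimal sketch of that approach, assuming hypothetical Customer/Project/License classes whose collections are loaded:

public void SoftDeleteCustomer(Customer customer)
{
    customer.IsDeleted = true;
    foreach (var project in customer.Projects)
    {
        project.IsDeleted = true;
        foreach (var license in project.Licenses)
        {
            license.IsDeleted = true;
        }
    }
    // Persist all three levels inside a single transaction
    // via your data access code.
}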
I recommend just writing a stored procedure (or group of stored procedures) to encapsulate this logic, which would look something like this:
update Customer set isDeleted = 1
where CustomerId = @CustomerId

/* Say the Address table has a foreign key to Customer */
update Address set isDeleted = 1
where CustomerId = @CustomerId

/*
    To delete related records that also have child data,
    write and call other procedures to handle the details.
*/
exec DeleteProjectByCustomer @CustomerId

/* ... etc ... */
Then call this procedure from customerRepository.Delete, inside a transaction.
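A minimal sketch of that call, assuming the Enterprise Library Data Access block mentioned in the question (the procedure and parameter names are hypothetical) and a System.Transactions TransactionScope:

using (var scope = new TransactionScope())
{
    Database db = DatabaseFactory.CreateDatabase();
    DbCommand cmd = db.GetStoredProcCommand("DeleteCustomer");
    db.AddInParameter(cmd, "CustomerId", DbType.Int32, customerId);
    db.ExecuteNonQuery(cmd);

    scope.Complete(); // Commits; an exception escaping the scope rolls back.
}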
This totally depends on your DAL. For instance, NHibernate mappings can be set up to cascade-delete all these associated objects without extra code. I'm sure EF has something similar. How are you connecting to your DB?
If your objects aren't persisted, then the .NET GC will sweep all your project objects away once there are no references to them. I presume from your question, though, that you are talking about removing them from the database?
If your relationships are fixed (i.e. a license is always related to a project, and a project to a customer), you can get away with not cascading the update at all. Since you're already dealing with the pain of soft deletes in your queries, you might as well add in the pain of checking the hierarchy:
SELECT [...] FROM License l
JOIN Project p ON l.ProjectID = p.ID
JOIN Customer c on p.CustomerID = c.ID
WHERE l.IsDeleted <> 1 AND p.IsDeleted <> 1 AND c.IsDeleted <> 1
This will add a performance burden only in the case where you have queries on a child table that don't join to the ancestor tables.
It has an additional merit over a cascading approach: it lets you undelete items without automatically undeleting their children. If I delete one of a project's licenses, then delete the project, then undelete the project, a cascading approach will lose the fact that I deleted that first license. This approach won't.
In your object model, you'd implement it like this:
private bool _IsDeleted;

public bool IsDeleted
{
    get
    {
        // An item counts as deleted if it is deleted itself
        // or if any ancestor is deleted.
        return _IsDeleted || (Parent != null && Parent.IsDeleted);
    }
    set
    {
        _IsDeleted = value;
    }
}
...though you must be careful to actually store the private _IsDeleted value in the database, and not the value of IsDeleted.
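One hedged way to do that (the name here is illustrative): expose a second pass-through property, map it to the database column, and keep the computed IsDeleted for reads only.

// Map this property to the database column instead of IsDeleted.
public bool IsDeletedRaw
{
    get { return _IsDeleted; }
    set { _IsDeleted = value; }
}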