I need to write logic that lets me export/import the whole database. The ids should of course be ignored when doing this, so that if I export the data and then import it, the whole graph is cloned.
The idea was to use simple binary serialization without any custom code, so I could serialize any graph of objects I want. But I ran into an NHibernate problem.
The thing is that this graph contains many objects that are actually different objects (different references) representing a single persistent object. That is very difficult to fix, as the graph is very complex and I would need to redo the whole application, so I have to live with it.
If I just save the whole graph to a file, then deserialize it and try to save it to the DB as-is, these objects will have ids assigned, so NHibernate will probably fail. I need to clear the ids. But if I do that, NHibernate stops knowing the identity of each object, so every object is transient and, of course, no two are equal.
Example:
I have User {Id = 3}
and Mail {Id = 2, with User(Id = 3)}
The two users here have the same Id, so they are equal, but not reference-equal.
When I clear the Ids in this graph, both users become different objects, as they are not reference-equal.
Can I somehow tell NHibernate that even though the objects have ids (!= 0), they are transient, should be inserted into the DB, and should receive new ids? Or maybe you know another way of solving my problem.
P.S. All the objects are detached - when I say that they are persistent, I mean that they have Id != 0 and they had copies in a DB before exporting (possibly a different DB).
Update
I have added an example of the code I want to work. The SaveOrUpdate calls at the end should insert exactly one object per run. The actual code is a bit more complicated, but the point is that I have a single hierarchy containing s1 and s2 (two different objects which represent the same persistent object: their Equals() is true, but ReferenceEquals is false), and I need to clone it, save it, and ensure that the resulting object is stored only once in the database.
User s1;
User s2;
using (var session = DBHandler.GetSessionFactory().OpenSession())
{
    s1 = session.Get<User>(1);
}
using (var session = DBHandler.GetSessionFactory().OpenSession())
{
    s2 = session.Get<User>(1);
}

var c1 = (User)DBHandler.DeepClone(s1);
var c2 = (User)DBHandler.DeepClone(s2);

// These updates should insert only one object, because it is actually one object.
using (var session = DBHandler.GetSessionFactory().OpenSession())
{
    session.SaveOrUpdate(c1);
}
using (var session = DBHandler.GetSessionFactory().OpenSession())
{
    session.SaveOrUpdate(c2);
}
Along the same lines, here is a proof of concept:
Configuration config;
ISessionFactory factory;

public object DeepClone(object original)
{
    var metadata = factory.GetClassMetadata(original.GetType());
    // Instantiate with the unsaved id value so NHibernate treats the clone as transient
    var clone = metadata.Instantiate(0 /* or extract the unsaved value from config */, EntityMode.Poco);
    var values = metadata.GetPropertyValues(original, EntityMode.Poco);
    for (int i = 0; i < metadata.PropertyTypes.Length; i++)
    {
        // Collections are association types too, so check them first
        if (metadata.PropertyTypes[i].IsCollectionType)
        {
            // TODO: Copy Collection
        }
        else if (metadata.PropertyTypes[i].IsAssociationType && values[i] != null)
        {
            // Recursively clone many-to-one / one-to-one associations
            values[i] = DeepClone(values[i]);
        }
    }
    metadata.SetPropertyValues(clone, values, EntityMode.Poco);
    return clone;
}
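One gap in this proof of concept, given the requirement above that c1 and c2 end up as a single row: nothing remembers which persistent objects were already cloned. A possible extension, sketched here as an untested assumption rather than a definitive fix, is an identity map keyed by entity name and id:
// Hypothetical sketch: remember clones by (entity name, id) so two references
// to the same persistent object produce one shared clone.
private readonly Dictionary<string, object> _cloned = new Dictionary<string, object>();

public object DeepCloneWithIdentity(object original)
{
    var metadata = factory.GetClassMetadata(original.GetType());
    var id = metadata.GetIdentifier(original, EntityMode.Poco);
    var key = metadata.EntityName + "#" + id;

    object existing;
    if (_cloned.TryGetValue(key, out existing))
    {
        return existing; // seen before: reuse the clone so identity is preserved
    }

    var clone = DeepClone(original); // the proof of concept above
    _cloned[key] = clone;
    return clone;
}
For the whole graph to benefit, the recursive DeepClone calls on associations would need to route through this wrapper as well, and the map would be cleared once per import.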
Related
I've been struggling for a while with a problem that consists of generically auditing database entities when they're saved. I have a project that uses EF 6, and I was required to create a "non-invasive" way to audit entities when they're added, modified, or deleted. I have to store a JSON snapshot of the inserted, modified, or deleted entity without interfering with the normal flow. The project has a Database First implementation.
My solution was simple: add a partial class for any entity the rest of the programmers want to audit, implementing IAudit, which is basically an empty marker interface used to pick up all changes from entities that implement it.
public interface IAudit {}
I have a Currencies entity that just implements it, without any other code (I could do something else in the future, but I don't need to):
public partial class Currencies : IAudit
I override the SaveChanges method to look for entities to audit
public override int SaveChanges()
{
    ChangeTracker.DetectChanges();
    // Look for entities marked for audit in each state (the LINQ lives in CreateAuditLog)
    CreateAuditLog(System.Data.Entity.EntityState.Added);
    CreateAuditLog(System.Data.Entity.EntityState.Modified);
    CreateAuditLog(System.Data.Entity.EntityState.Deleted);
    return base.SaveChanges();
}
The solution calls CreateAuditLog three times because in the near future I need to implement configuration to audit whatever the user decides, perhaps from a database setting that users activate/deactivate.
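A hypothetical sketch of where that configuration could plug in (GetEnabledAuditStates is an assumed helper that would read the user's settings; it does not exist in the project):
public override int SaveChanges()
{
    ChangeTracker.DetectChanges();
    // Audit only the states the user has enabled, instead of hard-coding all three
    foreach (var state in GetEnabledAuditStates())
    {
        CreateAuditLog(state);
    }
    return base.SaveChanges();
}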
Everything worked perfectly; I was able to get the saved entities in the specified state:
private void CreateAuditLog(System.Data.Entity.EntityState state)
{
    var auditedEntities = ChangeTracker.Entries<IAudit>()
        .Where(p => p.State == state)
        .Select(p => p.Entity);
    // ... some code that does something else
    foreach (var auditedEntity in auditedEntities)
    {
        // ... some information I needed to add
        strJSON = JsonConvert.SerializeObject(auditedEntity, new EFNavigationPropertyConverter());
        // ... some code to save the audit information
    }
}
The problem is that I lose every value in the Deleted state; I only get the ID. There is no information in the properties except the ID, and no way to extract it. I looked for every single solution on StackOverflow and other websites, and there is nothing that recovers the original information.
How can I get the deleted entity's previous values so I can store them the same way I'm storing Added and Modified entities?
It took me a couple of days to figure it out. The solution might be a bit complex, but I tried several less complex options with no good results.
First, since I'm only auditing the Deleted state in a different way, I separated it from Added and Modified, which work well with no change. The Deleted state is a special case, and I treat it as such.
I needed to obtain the original values from the database, because in the Deleted state they're gone; there's no possibility of recovering them from the entity. They can be obtained with the following code:
var databaseValues = this.Entry(auditedEntity).GetDatabaseValues();
The result is just a collection of database property values (DbPropertyValues). With these in hand, I set them as the original values of the deleted entity:
dbEntityEntry.OriginalValues.SetValues(databaseValues);
This line just fills the entity's original values; it doesn't modify the current values at all. Doing it this way is useful because checking every property and setting it ourselves would take some code; this is an interesting shortcut.
Now, the problem is that I don't have an entity to serialize, so I need a new one, which in my case I create by reflection, because I don't know the concrete type (I receive entities that implement IAudit):
Type type = auditedEntity.GetType();
var auditDeletedEntity = Activator.CreateInstance(type);
This is the entity I will serialize to store the audit later.
Now, the complex part: I need to get the entity's properties and fill them by reflection from the original values set on the entity:
foreach (var propertyInfo in type.GetProperties())
{
    if (!propertyInfo.PropertyType.IsArray && !propertyInfo.PropertyType.IsGenericType)
    {
        var propertyValue = originalValues.GetValue<object>(propertyInfo.Name);
        auditDeletedEntity.GetType().InvokeMember(propertyInfo.Name,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.SetProperty,
            Type.DefaultBinder, auditDeletedEntity, new[] { propertyValue });
    }
}
I had to check for generic and array types to avoid following EF relations, which would not work with this method and which I also don't need (I need the object, not the whole tree).
After that I simply need to serialize the audited deleted entity:
strJSON = JsonConvert.SerializeObject(auditDeletedEntity, new EFNavigationPropertyConverter());
The code looks like this:
string strJSON = string.Empty;
if (state == System.Data.Entity.EntityState.Deleted)
{
    // Get original values from the database (the only option; in a delete they're lost)
    var databaseValues = this.Entry(auditedEntity).GetDatabaseValues();
    DbEntityEntry dbEntityEntry = this.Entry(auditedEntity);
    if (databaseValues != null)
    {
        dbEntityEntry.OriginalValues.SetValues(databaseValues);
        var originalValues = this.Entry(auditedEntity).OriginalValues;
        Type type = auditedEntity.GetType();
        var auditDeletedEntity = Activator.CreateInstance(type);
        // Fill the new instance's properties by reflection
        foreach (var propertyInfo in type.GetProperties())
        {
            if (!propertyInfo.PropertyType.IsArray && !propertyInfo.PropertyType.IsGenericType)
            {
                var propertyValue = originalValues.GetValue<object>(propertyInfo.Name);
                auditDeletedEntity.GetType().InvokeMember(propertyInfo.Name,
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.SetProperty,
                    Type.DefaultBinder, auditDeletedEntity, new[] { propertyValue });
            }
        }
        strJSON = JsonConvert.SerializeObject(auditDeletedEntity, new EFNavigationPropertyConverter());
    }
}
else
{
    strJSON = JsonConvert.SerializeObject(auditedEntity, new EFNavigationPropertyConverter());
}
There might be a better way, but I seriously spent a good amount of time looking for options and I couldn't find anything better.
Any suggestion or optimization is appreciated.
I am looking for help with an NHibernate issue that has been bugging me for a while now. Long story short:
I'm looking for a way to "reset" a property on an entity in the first-level cache each time I do an update or an insert.
What I want to achieve is that the property in question will always be considered dirty by NHibernate when using dynamic update or insert.
The backstory is that I know that, if the transaction was successful, the column I want to "reset" will be set to NULL in the database by a trigger. The first-level cache, on the other hand, does not know this, so NHibernate thinks the property was not updated when I set it to the same value as on the previous update/insert. The catch is that my trigger depends on this value being set. The resulting mess is that with dynamic update or insert I can only update/insert an entity once without "refreshing" it afterwards (which I really don't want to do).
Tips or help would be much appreciated, because I've really hit a wall here.
NHibernate provides many extension points. Among them is the session IInterceptor. There is documentation with many details:
http://nhibernate.info/doc/nh/en/index.html#objectstate-interceptors
In this case, we can create a custom one which observes our entity (for example Client) and the property which must be updated every time (for example Code). Our implementation could look like this:
public class MyInterceptor : EmptyInterceptor
{
    public override int[] FindDirty(object entity, object id, object[] currentState, object[] previousState, string[] propertyNames, NHibernate.Type.IType[] types)
    {
        // We do not care about other entities here
        if (!(entity is Client))
        {
            return null; // null means: let NHibernate do its default dirty check
        }
        var result = new List<int>();
        var length = propertyNames.Length;
        // Iterate all properties
        for (var i = 0; i < length; i++)
        {
            // Use the static object.Equals so a null current value doesn't throw
            var areEqual = Equals(currentState[i], previousState[i]);
            var isResettingProperty = propertyNames[i] == "Code";
            if (!areEqual || isResettingProperty)
            {
                result.Add(i); // the index of the "Code" property is always added
            }
        }
        return result.ToArray();
    }
}
NOTE: This is just an example! Apply your own logic for checking the dirty properties.
And we have to plug the interceptor into the Session, for example globally via the configuration:
var interceptor = new MyInterceptor();
_configuration.SetInterceptor(interceptor);
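Alternatively, if the behaviour should apply only to particular sessions rather than globally, NHibernate also accepts a session-local interceptor when the session is opened:
using (var session = sessionFactory.OpenSession(new MyInterceptor()))
{
    // FindDirty is consulted for flushes on this session only
}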
And that's it. As long as Client is mapped with dynamic-update, the property Code will always be treated as dirty:
<class name="Client" dynamic-update="true" ...
I'm looking at a problem where I want to fetch a collection from an expensive service call and then store it in cache, so it can be used for subsequent operations on the UI. The code I'm using is as follows:
List<OrganisationVO> organisations = (List<OrganisationVO>)MemoryCache.Default["OrganisationVOs"];
List<Organisation> orgs = new List<Organisation>();
if (organisations == null)
{
    organisations = new List<OrganisationVO>();
    orgs = pmService.GetOrganisationsByName("", 0, 4000, ref totalCount);
    foreach (Organisation org in orgs)
    {
        OrganisationVO orgVO = new OrganisationVO();
        orgVO = Mapper.ToViewObject(org);
        organisations.Add(orgVO);
    }
    MemoryCache.Default.AddOrGetExisting("OrganisationVOs", organisations, DateTime.Now.AddMinutes(10));
}
List<OrganisationVO> data = new List<OrganisationVO>();
data = organisations;
if (!string.IsNullOrEmpty(filter) && filter != "*")
{
    data.RemoveAll(filterOrg => !filterOrg.DisplayName.ToLower().StartsWith(filter.ToLower()));
}
The issue I'm facing is that the data.RemoveAll operation affects the cached version, i.e. the cached version no longer reflects the full dataset returned by the service call. I want the cached version to always hold the full dataset, retrieve the collection from cache whenever the filter is set, and apply the filter without changing the cached data, so that subsequent filters run against the full dataset. What is the best way to do this?
You need to make a copy of the list if you want to use the RemoveAll operation (ToList would be enough).
Also, instead of modifying the list, consider using LINQ operations like Where/Select.
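For example, a minimal sketch using the names from the question (requires System.Linq):
// Copy first, so RemoveAll can no longer touch the cached list:
List<OrganisationVO> data = organisations.ToList();

// Or skip the mutation entirely and project a filtered copy:
List<OrganisationVO> filtered = organisations
    .Where(o => o.DisplayName.StartsWith(filter, StringComparison.OrdinalIgnoreCase))
    .ToList();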
I would either:
apply the filter dynamically and replace it as needed, so you cache the complete data but only return cachedData.Where(currentFilter)
keep two caches - one for the complete data and one for the filtered data - in which case the first one should only contain the data returned from the service; there is no need to cache the VO data as well
I have a console application with a few methods that:
insert data1 (customers) from db 1 to db 2
update data1 from db 1 to db 2
insert data2 (contacts) from db 1 to db 2
update data2 from db 1 to db 2
and then sync some data from db 2 (accessed via web services) to db 1 (MySQL). The methods run when the application executes.
With these inserts and updates I need to compare a field (country state) with a value in a list I get from a web service. To get the states I have to do:
GetAllRecord getAllStates = new GetAllRecord();
getAllStates.recordType = GetAllRecordType.state;
getAllStates.recordTypeSpecified = true;
GetAllResult stateResult = _service.getAll(getAllStates);
Record[] stateRecords = stateResult.recordList;
and I can then loop through the array and look for shortname/fullname with
if (stateResult.status.isSuccess)
{
    foreach (State state in stateRecords)
    {
        if (addressState.ToUpper() == state.fullName.ToUpper())
        {
            addressState = state.shortname;
        }
    }
}
As it is now, I have the code above in all my methods, but fetching the state data takes a lot of time and I have to do it many times: there are about 40k records, and the web service only lets me get 1k at a time, so I have to call a "searchNext" method 39 times, meaning I query the web service 40 times for the states in each method.
I guess I could come up with something myself, but I'm checking what best practice would be. If I create a separate method or class, how can I access this list with all its values many times without having to download them again?
Edit: should I do something like this:
GetAllRecord getAllStates = new GetAllRecord();
getAllStates.recordType = GetAllRecordType.state;
getAllStates.recordTypeSpecified = true;
GetAllResult stateResult = _service.getAll(getAllStates);
Record[] stateRecords = stateResult.recordList;
Dictionary<string, string> allStates = new Dictionary<string, string>();
foreach (State state in stateRecords)
{
    allStates.Add(state.shortname, state.fullName);
}
I am not sure where to put it, though, or how to access it from my methods.
One thing first: you should add a break to your code when you get a match. There is no need to continue looping through the foreach after you have a match:
addressState = state.shortname;
break;
40 thousand records isn't necessarily that much with today's computers, and I would definitely implement a cache of the fullname <-> shortname pairs.
If the data doesn't change very often, this is a perfectly good approach.
Create a Dictionary with fullName as the key and shortName as the value. Then you can just do a lookup in the methods which need to translate the full name to the short name. You could either store this dictionary as a static variable accessible from other classes, or keep it in an instance class which you pass to your other objects as a reference.
If the data changes, you could refresh your cache every so often.
This way you only call the web service 40 times to get all the data, and all other lookups are in memory.
Code sample (not tested):
class MyCache
{
    public static Dictionary<string, string> Cache = new Dictionary<string, string>();

    public static void FillCache()
    {
        GetAllRecord getAllStates = new GetAllRecord();
        getAllStates.recordType = GetAllRecordType.state;
        getAllStates.recordTypeSpecified = true;
        GetAllResult stateResult = _service.getAll(getAllStates);
        Record[] stateRecords = stateResult.recordList;
        if (stateResult.status.isSuccess)
        {
            foreach (State state in stateRecords)
            {
                Cache[state.fullName.ToUpper()] = state.shortname;
            }
        }
        // ... and some code to do the rest of the web service calls until you have all results.
    }
}

void Main()
{
    // Initialize the cache
    MyCache.FillCache();
}
and in some method that uses it:
...
string stateName = "something";
string shortName = MyCache.Cache[stateName.ToUpper()];
An easy approach (and you really should do this) is to cache the data locally. If I understand you correctly, you do the web service check every time something changes, which is likely unnecessary.
An easy implementation (if you can't or don't want to change your original data structures) would be to use a Dictionary somewhat like:
Dictionary<string, string> cache = new Dictionary<string, string>();
cache[addressState] = state.shortname;
BTW: You REALLY should not be using ToUpper for case-insensitive compares. Use String.Compare(a, b, StringComparison.OrdinalIgnoreCase) instead.
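In the same spirit, the dictionary itself can be created with a case-insensitive comparer, which removes the ToUpper calls on lookups entirely (a small sketch based on the code above):
// The comparer makes every key lookup ordinal and case-insensitive
var cache = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
cache[state.fullName] = state.shortname;

string shortName;
if (cache.TryGetValue(addressState, out shortName))
{
    addressState = shortName;
}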
From what I gather, the first bit of code is inside some form of loop, and because of that the following line (which internally calls the web service) is being executed 40 times:
GetAllResult stateResult = _service.getAll(getAllStates);
Perhaps you should try moving the stateResult variable to class-level scope: make it a private field or something, so at least it will be there for the lifetime of the object. In the constructor of the class, or in a dedicated method, make the call to the web service. If you go with a method, make sure you call it once before executing your loop logic.
That way you wouldn't have to call the web service all the time, just once.
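A rough sketch of that suggestion, reusing the _service field from the question (EnsureStatesLoaded is a name made up for illustration):
private Record[] _stateRecords; // fetched once, reused by every method

private void EnsureStatesLoaded()
{
    if (_stateRecords != null) return; // already fetched

    GetAllRecord getAllStates = new GetAllRecord();
    getAllStates.recordType = GetAllRecordType.state;
    getAllStates.recordTypeSpecified = true;
    GetAllResult stateResult = _service.getAll(getAllStates);
    if (stateResult.status.isSuccess)
    {
        _stateRecords = stateResult.recordList;
    }
}
Each method then calls EnsureStatesLoaded() first and loops over _stateRecords instead of hitting the web service again.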
Model #1 - This model sits in a database on our Dev Server.
Model #1 http://content.screencast.com/users/Keith.Barrows/folders/Jing/media/bdb2b000-6e60-4af0-a7a1-2bb6b05d8bc1/Model1.png
Model #2 - This model sits in a database on our Prod Server and is updated each day by automatic feeds. Model #2 http://content.screencast.com/users/Keith.Barrows/folders/Jing/media/4260259f-bce6-43d5-9d2a-017bd9a980d4/Model2.png
I have written what should be some simple code to sync my feed (Model #2) into my working DB (Model #1). Please note this is prototype code, and the models may not be as pretty as they should be. Also, the entry into Model #1 for the feed link data (mainly ClientID) is a manual process at this point, which is why I am writing this simple sync method.
private void SyncFeeds()
{
    var sourceList = from a in _dbFeed.Auto where a.Active == true select a;
    foreach (RivWorks.Model.NegotiationAutos.Auto source in sourceList)
    {
        var targetList = from a in _dbRiv.Product where a.alternateProductID == source.AutoID select a;
        if (targetList.Count() > 0)
        {
            // UPDATE...
            try
            {
                var product = targetList.First();
                product.alternateProductID = source.AutoID;
                product.isFromFeed = true;
                product.isDeleted = false;
                product.SKU = source.StockNumber;
                _dbRiv.SaveChanges();
            }
            catch (Exception ex)
            {
                string m = ex.Message;
            }
        }
        else
        {
            // INSERT...
            try
            {
                long clientID = source.Client.ClientID;
                var companyDetail = (from a in _dbRiv.AutoNegotiationDetails where a.ClientID == clientID select a).First();
                var company = companyDetail.Company;
                switch (companyDetail.FeedSourceTable.ToUpper())
                {
                    case "AUTO":
                        var product = new RivWorks.Model.Negotiation.Product();
                        product.alternateProductID = source.AutoID;
                        product.isFromFeed = true;
                        product.isDeleted = false;
                        product.SKU = source.StockNumber;
                        company.Product.Add(product);
                        break;
                }
                _dbRiv.SaveChanges();
            }
            catch (Exception ex)
            {
                string m = ex.Message;
            }
        }
    }
}
Now for the questions:
In Model #2, the class structure for Auto is missing ClientID (see the red-circled area). From everything I have learned, EF creates a child Client object, and I should be able to find the ClientID through that child object. Yet when I run my code, source.Client is a NULL object. Am I expecting something that EF does not do? Is there a way to populate the child object correctly?
Why does EF hide the child entity ID (ClientID in this case) in the parent table? Is there any way to expose it?
What else sticks out like the proverbial sore thumb?
TIA
1) The reason you are seeing a null for source.Client is that related objects are not loaded until you request them, or they are otherwise loaded into the object context. The following will load them explicitly:
if (!source.ClientReference.IsLoaded)
{
    source.ClientReference.Load();
}
However, this is sub-optimal when you have a list of more than one record, as it sends one database query per Load() call. A better alternative is to use the Include() method in your initial query to instruct the ORM to load the related entities you are interested in, so:
var sourceList = from a in _dbFeed.Auto.Include("Client") where a.Active == true select a;
A third alternative is to use something called relationship fix-up: if, in your example, the related clients had been queried previously, they would still be in your object context. For example:
var clients = (from a in _dbFeed.Client select a).ToList();
The EF will then 'fix up' the relationships so source.Client would not be null. Obviously this is only something you would do if you required a list of all clients for syncing, so it is not relevant for your specific example.
Always remember that objects are never loaded into the EF unless you request them!
2) The first version of the EF deliberately does not map foreign key fields to observable fields or properties. This is a good rundown on the matter. In EF4.0, I understand foreign keys will be exposed due to popular demand.
3) One issue you may run into is the number of database queries that requesting Products or AutoNegotiationContacts may generate. As an alternative, consider loading them in bulk or with a join in your initial query.
It's also seen as good practice to use an object context for one 'operation' and then dispose of it, rather than persisting it across requests. There is very little overhead in initialising one, so one object context per SyncFeeds() call is more appropriate. ObjectContext implements IDisposable, so you can instantiate it in a using block and wrap the method's contents in that block, to ensure everything is cleaned up correctly once your changes are submitted.
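A minimal sketch of that pattern (RivWorksEntities is an assumed name for the context type, which the question doesn't show):
private void SyncFeeds()
{
    // One context per operation; disposed as soon as the sync completes
    using (var dbRiv = new RivWorksEntities())
    {
        // ... the update/insert logic above, using dbRiv instead of _dbRiv ...
        dbRiv.SaveChanges();
    }
}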