I have two objects... and if I compile a program with either one, it works fine, but when they both exist in the same program, I get the exception...
"Entities in 'ObjectContext.UnitSet' participate in the 'Sheet_Statistics' relationship. 0 related 'Sheet' were found. 1 'Sheet' is expected."
class Unit
{
    public int Id { get; set; }
    public string Name { get; set; }
}
class Template
{
    public int Id { get; set; }
    public string Name { get; set; } // used when seeding the templates below
    public virtual ICollection<Unit> Units { get; set; }
}
class Sheet
{
    public int Id { get; set; }
    public virtual ICollection<Unit> Units { get; set; }
}
Then their configurations:
class TemplateConfiguration : EntityConfiguration<Template>
// ....
//// map the collection entity
HasMany(k => k.Units).WithRequired()
.Map("template.units",
(template, unit) => new
{
Template = template.Id,
Unit = unit.Id
});
class SheetConfiguration : EntityConfiguration<Sheet>
// ....
//// map the collection entity
HasMany(k => k.Units).WithRequired()
.Map("sheet.units",
(sheet, unit) => new
{
Sheet = sheet.Id,
Unit = unit.Id
});
class UnitConfiguration : EntityConfiguration<Unit>
//
// Initialize the Primary Key
HasKey(k => k.Id);
// Initialize that the Key Increments as an Identity
Property(k => k.Id).IsIdentity();
var templates = new List<Template>
{
new Template
{
Name = // ..,
Units = new List<Unit>
{
new Unit
{
// ...
}
}
}
};
templates.ForEach(x =>
{
context.Templates.Add(x);
});
context.SaveChanges(); // <-- Exception Happens Here, I never even get to try to add Sheets.
I'm taking a stab at this because, without seeing all of your code, I can't narrow it down much further. I think your problem is that you're creating Units without setting some sort of Sheet reference. Because the Unit-to-Sheet relationship is mapped as required (WithRequired()), you need to create the Sheet and the Unit both before you can save either one — hence the error you're getting. If you provide more of your entity/configuration code I'll be able to refine this answer.
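A minimal sketch of that idea, reusing the entities above (the context's Sheets set and the unit's name are assumptions):

```csharp
// Unit -> Sheet is mapped with WithRequired(), so every saved Unit must
// belong to exactly one Sheet. Build the Sheet with its Units before saving.
var sheet = new Sheet { Units = new List<Unit>() };
sheet.Units.Add(new Unit { Name = "example" }); // hypothetical unit

context.Sheets.Add(sheet);  // adds the sheet and, by reachability, its units
context.SaveChanges();      // the required Sheet side is now satisfied
```

If a Unit should instead be allowed to exist without a Sheet (e.g. when it belongs only to a Template), mapping that side with WithOptional() rather than WithRequired() would avoid the constraint altogether.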
Related
I have setup an entity which looks like this:
public partial class Test
{
public Test()
{
}
public int Id { get; set; }
public virtual Test2 Test2 { get; set; }
//and some other fields
}
Test2 is another entity, with no navigation properties:
public partial class Test2
{
public int Id {get;set;}
public string Name {get;set;}
}
I don't have a model builder set up in the context; it just has DbSet properties for Test and Test2.
I'm attempting to update a row in the database from an unattached object that I create based on user input. In this test case the Test2 reference is null in the database, and I am trying to point it at the Test2 with Id 3:
static void Main(string[] args)
{
using(var context = new db.Model1())
{
var test = new db.Test();
// we know id 1 exists for this simple test case
test.Id = 1;
// fake user input is 3 for test2 id
test.Test2 = context.Test2.Where(x => x.Id == 3).FirstOrDefault();
context.Test.Attach(test);
context.Entry(test).State = System.Data.Entity.EntityState.Modified;
context.SaveChanges();
}
}
The issue is that this doesn't change the Test_Id column in the database to 3. What am I missing?
I have determined that if instead of creating the test object using new(), I query it from the context and then update, it works fine (shown below):
var test = context.Test.FirstOrDefault(x => x.Id == 2);
test.Test2 = context.Test2.FirstOrDefault(x => x.Id == 3);
context.SaveChanges();
Why does my first approach not work if I am attaching the object to the context and setting its state to modified? Can I get away with not having a separate test2_id field for the foreign key relationship, which seems redundant?
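For reference, a common pattern for this scenario is a foreign key association, where the FK is exposed as a scalar property; a sketch, with the Test2Id name assumed:

```csharp
public partial class Test
{
    public int Id { get; set; }
    public int? Test2Id { get; set; }        // hypothetical FK property
    public virtual Test2 Test2 { get; set; } // EF pairs these by naming convention
}

// The detached update then only needs the scalar value:
var test = new db.Test { Id = 1, Test2Id = 3 };
context.Test.Attach(test);
context.Entry(test).State = System.Data.Entity.EntityState.Modified;
context.SaveChanges();
```

The reason this matters: setting State to Modified marks only scalar properties as changed, while a navigation property without an FK property (an independent association) is tracked separately, so the first approach never writes Test_Id.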
How do I configure a bidirectional mapping for the scenario where X references Y and Y has many Xs, so that when I add several Xs to a Y instance and save the Y instance, it works properly?
To make the problem clear, here's the code:
ClientCompany model has HasMany ContactPerson related models:
public class ClientCompany
{
// ....
public virtual ISet<ContactPerson> ContactPeople { get; set; }
}
// mapping:
HasMany(x => x.ContactPeople)
.AsSet()
.Inverse()
.Cascade.All();
The ContactPerson has a non-nullable ClientCompany field referencing the parent company:
public class ContactPerson
{
// ....
public virtual ClientCompany ClientCompany { get; set; }
}
References(x => x.ClientCompany).Not.Nullable();
Calling this code:
sess.Save(new ClientCompany()
{
ContactPeople = new HashSet<ContactPerson>()
{
new ContactPerson()
}
});
causes this exception:
not-null property references a null or transient value
xxxx.ContactPerson.ClientCompany
This is just a simplified case; I'm using AutoMapper in my real-world project, so setting the reference manually is not the solution.
The most important thing in the ORM world, particularly when using NHibernate, is:
ALWAYS set both sides of the relation.
This is the valid way to set both sides:
var person = new ContactPerson();
var company = new ClientCompany
{
ContactPeople = new HashSet<ContactPerson>()
{
person,
},
};
person.ClientCompany = company;
sess.Save(company);
The mapping declares the ContactPerson as responsible for its side of the relation (the inverse mapping, .Inverse()).
That means it must have its reference to the parent set. This reference is used later for the INSERT statement... and it cannot be null.
There is no need to save the company and person separately. The only essential and absolutely important thing is to set both sides of the reference. That is a MUST.
Not sure what you can do with AutoMapper... but some kind of wrapping/shielding of the bidirectional setting could be a special method on the POCO object:
public class ClientCompany
{
// ....
public virtual ISet<ContactPerson> ContactPeople { get; set; }
public virtual void AddPerson(ContactPerson person)
{
ContactPeople = ContactPeople ?? new HashSet<ContactPerson>();
ContactPeople.Add(person);
person.ClientCompany = this;
}
}
Now we just have to call AddPerson() at the right time...
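A usage sketch of the helper (same types as above):

```csharp
var company = new ClientCompany();
company.AddPerson(new ContactPerson()); // sets both the collection and the back-reference
sess.Save(company);                     // INSERT now has a non-null ClientCompany
```

With AutoMapper, one option is to re-link the children in an AfterMap callback, so the helper (or the back-reference assignment) runs after the mapping step.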
This is my first time using Entity Framework 6.1 (code first). I keep running into a problem where my navigation properties are null when I don't expect them to be. I've enabled lazy loading.
My entity looks like this:
public class Ask
{
public Ask()
{
this.quantity = -1;
this.price = -1;
}
public int id { get; set; }
public int quantity { get; set; }
public float price { get; set; }
public int sellerId { get; set; }
public virtual User seller { get; set; }
public int itemId { get; set; }
public virtual Item item { get; set; }
}
It has the following mapper:
class AskMapper : EntityTypeConfiguration<Ask>
{
public AskMapper()
{
this.ToTable("Asks");
this.HasKey(a => a.id);
this.Property(a => a.id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
this.Property(a => a.id).IsRequired();
this.Property(a => a.quantity).IsRequired();
this.Property(a => a.price).IsRequired();
this.Property(a => a.sellerId).IsRequired();
this.HasRequired(a => a.seller).WithMany(u => u.asks).HasForeignKey(a => a.sellerId).WillCascadeOnDelete(true);
this.Property(a => a.itemId).IsRequired();
this.HasRequired(a => a.item).WithMany(i => i.asks).HasForeignKey(a => a.itemId).WillCascadeOnDelete(true);
}
}
Specifically, the problem is that I have an Ask object with a correctly set itemId (which does correspond to an Item in the database), but the navigation property item is null, and as a result I end up getting a NullReferenceException. The exception is thrown in the code below, when I try to access a.item.name:
List<Ask> asks = repo.GetAsksBySeller(userId).ToList();
List<ReducedAsk> reducedAsks = new List<ReducedAsk>();
foreach (Ask a in asks)
{
ReducedAsk r = new ReducedAsk() { id = a.id, sellerName = a.seller.username, itemId = a.itemId, itemName = a.item.name, price = a.price, quantity = a.quantity };
reducedAsks.Add(r);
}
Confusingly, the seller navigation property is working fine there, and I can't find anything I've done differently in the 'User' entity, nor in its mapper.
I have a test which recreates this, but it passes without any problems:
public void canGetAsk()
{
int quantity = 2;
int price = 10;
//add a seller
User seller = new User() { username = "ted" };
Assert.IsNotNull(seller);
int sellerId = repo.InsertUser(seller);
Assert.AreNotEqual(-1, sellerId);
//add an item
Item item = new Item() { name = "fanta" };
Assert.IsNotNull(item);
int itemId = repo.InsertItem(item);
Assert.AreNotEqual(-1, itemId);
bool success = repo.AddInventory(sellerId, itemId, quantity);
Assert.IsTrue(success);
//add an ask
int askId = repo.InsertAsk(new Ask() { sellerId = sellerId, itemId = itemId, quantity = quantity, price = price });
Assert.AreNotEqual(-1, askId);
//retrieve the ask
Ask ask = repo.GetAsk(askId);
Assert.IsNotNull(ask);
//check the ask info
Assert.AreEqual(quantity, ask.quantity);
Assert.AreEqual(price, ask.price);
Assert.AreEqual(sellerId, ask.sellerId);
Assert.AreEqual(sellerId, ask.seller.id);
Assert.AreEqual(itemId, ask.itemId);
Assert.AreEqual(itemId, ask.item.id);
Assert.AreEqual("fanta", ask.item.name);
}
Any help would be extremely appreciated; this has been driving me crazy for days.
EDIT:
The database is SQL Server 2014.
At the moment, I have one shared context, instantiated the level above this (my repository layer for the db). Should I be instantiating a new context for each method? Or instantiating one at the lowest possible level (i.e. for every db access)? For example:
public IQueryable<Ask> GetAsksBySeller(int sellerId)
{
using (MarketContext _ctx = new MarketContext())
{
return _ctx.Asks.Where(s => s.seller.id == sellerId).AsQueryable();
}
}
Some of my methods invoke others in the repo layer. Would it better for each method to take a context, which it can then pass to any methods it calls?
public IQueryable<Transaction> GetTransactionsByUser(MarketContext _ctx, int userId)
{
IQueryable<Transaction> buyTransactions = GetTransactionsByBuyer(_ctx, userId);
IQueryable<Transaction> sellTransactions = GetTransactionsBySeller(_ctx, userId);
return buyTransactions.Concat(sellTransactions);
}
Then I could just instantiate a new context whenever I call anything from the repo layer: repo.GetTransactionsByUser(new MarketContext(), userId);
Again, thanks for the help. I'm new to this, and don't know which approach would be best.
Try adding Include calls in your repository method:
public IQueryable<Ask> GetAsksBySeller(int sellerId)
{
using (MarketContext _ctx = new MarketContext())
{
return _ctx.Asks
.Include("seller")
.Include("item")
.Where(s => s.seller.id == sellerId).AsQueryable();
}
}
Also, there is an Include extension method which accepts a lambda expression as a parameter and gives you compile-time type checking:
http://msdn.microsoft.com/en-us/data/jj574232.aspx
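A sketch of the lambda form (the strongly-typed Include overload lives in the System.Data.Entity namespace in EF 6):

```csharp
using System.Data.Entity; // brings the lambda-based Include into scope

return _ctx.Asks
    .Include(a => a.seller) // checked at compile time, unlike "seller"
    .Include(a => a.item)
    .Where(a => a.seller.id == sellerId);
```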
As for the context lifespan, your repositories should share one context per request if this is a web application. Else it's a bit more arbitrary, but it should be something like a context per use case or service call.
So the pattern would be: create a context, pass it to the repositories involved in the call, do the task, and dispose the context. The context can be seen as your unit of work, so no matter how many repositories are involved, in the end one SaveChanges() should normally be enough to commit all changes.
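That pattern could be sketched as follows (the repository types are assumptions):

```csharp
using (var ctx = new MarketContext())        // one context = one unit of work
{
    var askRepo = new AskRepository(ctx);    // hypothetical repositories that
    var userRepo = new UserRepository(ctx);  // all share the same context

    // ... perform the task using both repositories ...

    ctx.SaveChanges();                       // single commit for all changes
}
```

This also implies that a repository method such as GetAsksBySeller should not create and dispose its own context: an IQueryable returned from inside a using block is enumerated after the context has been disposed, which fails and also makes lazy loading impossible.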
I can't tell if this will solve the lazy loading issue, because from what I see I can't explain why lazy loading isn't kicking in.
Although if I were in your shoes I'd want to get to the bottom of it, lazy loading is something that should not be relied on too much. Take a look at your (abridged) code:
foreach (Ask a in asks)
{
ReducedAsk r = new ReducedAsk()
{
sellerName = a.seller.username,
itemName = a.item.name
};
If lazy loading worked as expected, this would execute two queries against the database for each iteration of the loop. Of course, that's highly inefficient. That's why using Include (as in Anton's answer) is better anyhow, not only to circumvent your issue.
A further optimization is to do the projection (i.e. the new {) in the query itself:
var reducedAsks = repo.GetAsksBySeller(userId)
.Select(a => new ReducedAsk() { ... })
.ToList();
(Assuming – and requiring – that repo.GetAsksBySeller returns IQueryable).
Now only the data necessary to create ReducedAsk will be fetched from the database and it prevents materialization of entities that you're not using anyway and relatively expensive processes as change tracking and relationship fixup.
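A sketch of the full projection, using the fields from the loop shown earlier (ReducedAsk is assumed to be a plain DTO, not a mapped entity):

```csharp
var reducedAsks = repo.GetAsksBySeller(userId)
    .Select(a => new ReducedAsk
    {
        id = a.id,
        sellerName = a.seller.username, // navigations become JOINs in SQL,
        itemId = a.itemId,              // so no Include or lazy loading is needed
        itemName = a.item.name,
        price = a.price,
        quantity = a.quantity
    })
    .ToList();
```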
I've been working on this problem for a week now and I'm getting so frustrated with EF. First off I have a super table -> sub table pattern going on in the database. It was designed with a code-first approach. The super type is called the WorkflowTask and is defined as follows:
public abstract class WorkflowTask
{
public int WorkflowTaskId { get; set; }
public int Order { get; set; }
public WorkflowTaskType WorkflowTaskType { get; set; }
public WorkFlowTaskState State { get; set; }
public ParentTask ParentTask { get; set; }
public WorkflowDefinition WorkflowDefinition { get; set; }
}
An example sub task would inherit from this task and provide additional properties:
public class DelayTask : WorkflowTask
{
public int Duration { get; set; }
}
This is mapped to the database as follows:
public class WorkflowTaskEntityConfiguration : EntityTypeConfiguration<WorkflowTask>
{
public WorkflowTaskEntityConfiguration()
{
HasKey(w => w.WorkflowTaskId);
Property(w => w.WorkflowTaskId).HasColumnName("Id");
Property(w => w.Order).HasColumnName("Order");
Property(w => w.WorkflowTaskType).HasColumnName("TaskTypeId");
Property(w => w.State).HasColumnName("TaskStateId");
HasOptional(c => c.ParentTask).WithMany()
.Map(c => c.MapKey("ParentTaskId"));
}
}
The delay task is mapped as follows:
public class DelayTaskEntityConfiguration : EntityTypeConfiguration<DelayTask>
{
public DelayTaskEntityConfiguration()
{
Property(d => d.WorkflowTaskId).HasColumnName("DelayTaskId");
Property(d => d.Duration).HasColumnName("Duration");
}
}
Hopefully you get the idea. Now I have another sub type called a container task. This task will hold other tasks and can potentially hold other container tasks. Here is what it looks like as well as the mapping:
public class ContainerTask : ParentTask
{
public ContainerTask()
{
base.WorkflowTaskType = WorkflowTaskType.Container;
base.ParentTaskType = ParentTaskType.ContainerTask;
}
public List<WorkflowTask> ChildTasks { get; set; }
}
public class ContainerTaskEntityConfiguration : EntityTypeConfiguration<ContainerTask>
{
public ContainerTaskEntityConfiguration()
{
Property(x => x.WorkflowTaskId).HasColumnName("ContainerTaskId");
HasMany(c => c.ChildTasks).WithMany()
.Map(c => c.ToTable("ContainerTaskChildren", WorkflowContext.SCHEMA_NAME)
.MapLeftKey("ContainerTaskId")
.MapRightKey("ChildTaskId"));
}
}
And to make sure I include everything, here is the ParentTask object as well as its mapping:
public abstract class ParentTask : WorkflowTask
{
public ParentTaskType ParentTaskType {get; set;}
}
public class ParentTaskEntityConfiguration : EntityTypeConfiguration<ParentTask>
{
public ParentTaskEntityConfiguration()
{
Property(w => w.WorkflowTaskId).HasColumnName("ParentTaskId");
Property(w => w.ParentTaskType).HasColumnName("ParentTaskTypeId");
}
}
Now the item I'm trying to save is the WorkflowDefinition object. It will execute a bunch of tasks in order. It is defined as follows:
public class WorkflowDefinition
{
public int WorkflowDefinitionId { get; set; }
public string WorkflowName { get; set; }
public bool Enabled { get; set; }
public List<WorkflowTask> WorkflowTasks { get; set; }
}
public class WorkflowDefinitionEntityConfiguration :
EntityTypeConfiguration<WorkflowDefinition>
{
public WorkflowDefinitionEntityConfiguration()
{
Property(w => w.WorkflowDefinitionId).HasColumnName("Id");
HasMany(w => w.WorkflowTasks)
.WithRequired(t=> t.WorkflowDefinition)
.Map(c => c.MapKey("WorkflowDefinitionId"));
Property(w => w.Enabled).HasColumnName("Enabled");
Property(w => w.WorkflowName).HasColumnName("WorkflowName");
}
}
So with all that defined, I am passing a WorkflowDefinition object into my data repository layer and want to save it using EF. Since the object has lost its context while being worked on in the UI, I have to re-associate it so that EF knows what to save. Within the UI I can add new tasks to the workflow, edit existing tasks, and delete tasks. If there were only one level of tasks (definition => tasks) this would be cake. My problem lies with there being the possibility of infinite levels (definition => tasks => childtasks => childtasks, etc...).
Currently I retrieve the existing workflow from the database and assign the values over (workflow is the value being passed in and is of type WorkflowDefinition):
// retrieve the workflow definition from the database so that it's within our context
var dbWorkflow = context.WorkflowDefinitions
.Where(w => w.WorkflowDefinitionId ==workflow.WorkflowDefinitionId)
.Include(c => c.WorkflowTasks).Single();
// transfer the values of the definition to the one we retrieved.
context.Entry(dbWorkflow).CurrentValues.SetValues(workflow);
I then loop through the list of tasks and either add them to the definition or find them and set their values. I added a SetDefinition function to the WorkflowTask object, which sets the WorkflowDefinition to the workflow within the context (previously I would get a key error because EF thought the parent workflow was a different one, even though the Ids matched). If it's a container, I run a recursive function to try and add all the children to the context.
foreach (var task in workflow.WorkflowTasks)
{
task.SetDefinition(dbWorkflow);
if (task.WorkflowTaskId == 0)
{
dbWorkflow.WorkflowTasks.Add(task);
}
else
{
WorkflowTask original = null;
if (task is ContainerTask)
{
original = context.ContainerTasks.Include("ChildTasks")
.Where(w => w.WorkflowTaskId == task.WorkflowTaskId)
.FirstOrDefault();
var container = task as ContainerTask;
var originalContainer = original as ContainerTask;
AddChildTasks(container, dbWorkflow, context, originalContainer);
}
else
{
original = dbWorkflow.WorkflowTasks.Find(t => t.WorkflowTaskId ==
task.WorkflowTaskId);
}
context.Entry(original).CurrentValues.SetValues(task);
}
}
The AddChildTasks function looks like this:
private void AddChildTasks(ContainerTask container, WorkflowDefinition workflow,
WorkflowContext context, ContainerTask original)
{
if (container.ChildTasks == null) return;
foreach (var task in container.ChildTasks)
{
if (task is ContainerTask)
{
var subContainer = task as ContainerTask;
AddChildTasks(subContainer, workflow, context, container);
}
if (task.WorkflowTaskId == 0)
{
if (container.ChildTasks == null)
container.ChildTasks = new List<WorkflowTask>();
original.ChildTasks.Add(task);
}
else
{
var originalChild = original.ChildTasks
.Find(t => t.WorkflowTaskId == task.WorkflowTaskId);
context.Entry(originalChild).CurrentValues.SetValues(task);
}
}
}
To delete tasks I've found I've had to do a two-step process. Step 1 involves going through the original definition and collecting tasks that are no longer in the passed-in definition. Step 2 is simply setting the state of those tasks to Deleted.
var deletedTasks = new List<WorkflowTask>();
foreach (var task in dbWorkflow.WorkflowTasks)
{
if (workflow.WorkflowTasks.Where(t => t.WorkflowTaskId ==
task.WorkflowTaskId).FirstOrDefault() == null)
deletedTasks.Add(task);
}
foreach (var task in deletedTasks)
context.Entry(task).State = EntityState.Deleted;
Here is where I run into problems. If I delete a container, I get a constraint error because the container contains children. The UI holds all changes in memory until I hit save, so even if I deleted the children first it still throws the constraint error. I'm thinking I need to map the children differently, maybe with a cascade delete or something. Also, when I loop through the tasks in the delete loop, both the container and its child get flagged for deletion, when I expect only the container to be flagged and the child to be deleted as a result.
Finally, the save portion above took me a good week to figure out and it looks complicated as hell. Is there an easier way to do this? I'm pretty new to EF and I'm starting to think it would be easier to have the code generate SQL statements and run those in the order I want.
This is my first question here so I apologize for the length as well as the formatting... hopefully it's legible :-)
One suggestion (I have not used it in a situation with this kind of endless possible recursion): if you want the cascade on delete to work natively, and it looks like all tasks are directly owned by some parent task, you could approach it by defining an identifying relationship:
modelBuilder.Entity<WorkflowTask>().HasKey(c => new { c.WorkflowTaskId,
    c.ParentTask.WorkflowTaskId });
This question is relevant: Can EF automatically delete data that is orphaned, where the parent is not deleted?
Edit: And this link: https://stackoverflow.com/a/4925040/1803682
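If the identifying-relationship route is too invasive, a lighter sketch is to switch the parent/child mapping shown earlier to cascade on delete (EF fluent API; whether this interacts cleanly with the ContainerTaskChildren many-to-many mapping would need verifying):

```csharp
HasOptional(c => c.ParentTask)
    .WithMany()
    .Map(c => c.MapKey("ParentTaskId"))
    .WillCascadeOnDelete(true); // deleting a parent also deletes rows that reference it
```

Note that the HasMany(...).WithMany() mapping for ChildTasks only cascades the join-table rows; the child WorkflowTask rows themselves would still need to be deleted explicitly.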
Ok, it took me way longer than I had hoped but I think I finally figured it out. At least all my tests are passing and all the definitions I've been saving have been working so far. There might be some combinations that I haven't thought about but for now I can at least move on to something else.
First off, I switched from passing in my object as a tree and decided to flatten it out. This just means that all tasks are visible from the root, but I still need to set parent properties as well as child properties.
foreach (var task in workflow.WorkflowTasks)
{
taskIds.Add(task.WorkflowTaskId); //adding the ids of all tasks to use later
task.SetDefinition(dbWorkflow); //sets the definition to record in context
SetParent(context, task); //Attempt to set the parent for any task
if (task.WorkflowTaskId == 0)
{
// I found if I added a task as a child it would duplicate if I added it
// here as well so I only add tasks with no parents
if (task.ParentTask == null)
dbWorkflow.WorkflowTasks.Add(task);
}
else
{
var dbTask = dbWorkflow.WorkflowTasks.Find(t => t.WorkflowTaskId == task.WorkflowTaskId);
context.Entry(dbTask).CurrentValues.SetValues(task);
}
}
The SetParent function has to check whether a task has a parent and make sure the parent isn't a new task (id == 0). It then tries to find the parent in the context version of the definition so that I don't end up with duplicates (i.e., if the parent is not referenced, EF tries to add a new one even though it exists in the database). Once the parent is identified, I check its children to see if that task is already there; if not, I add it.
private void SetParent(WorkflowContext context, WorkflowTask task)
{
if (task.ParentTask != null && task.ParentTask.WorkflowTaskId != 0)
{
var parentTask = context.WorkflowTasks.Where(t => t.WorkflowTaskId == task.ParentTask.WorkflowTaskId).FirstOrDefault();
var parent = parentTask as ParentTask;
task.ParentTask = parent;
if (parentTask is ContainerTask)
{
var container = context.ContainerTasks.Where(c => c.WorkflowTaskId == parentTask.WorkflowTaskId).Include(c => c.ChildTasks).FirstOrDefault() as ContainerTask;
if (container.ChildTasks == null)
container.ChildTasks = new List<WorkflowTask>();
var childTask = container.ChildTasks.Find(t => t.WorkflowTaskId == task.WorkflowTaskId
&& t.Order == task.Order);
if(childTask == null)
container.ChildTasks.Add(task);
}
}
}
One thing you will notice in the SetParent code is that I'm searching for a task by the ID and the Order. I had to do this because if I added two new children to a container both of the Ids would be zero and the second one wouldn't get added since it found the first one. Each task has a unique order so I used that to further differentiate them.
I don't feel super great about this code but I've been working on this problem for so long and this works so I'm going to leave it for now. I hope I covered all the information, I'm not too sure how many people will actually need this but you never know.
We are having an intermittent problem with NHibernate where it will occasionally generate a query with a wrong column in the SQL. If we restart the application the problem ceases to happen (sometimes it requires more than one restart). When the problem occurs, during the lifetime of that process, it always produces the wrong SQL for the affected entity. It's not always the same affected entity.
It's an ASP.NET application where the SessionFactory is created during the Application_Start event. All the configuration and mapping are done in code.
We don't have any more ideas how to test or debug the application, and I'm starting to assume there's some bug in NHibernate, since the application fixes itself upon restart. Any ideas/tips will be much appreciated!
Here's an example:
Entity
namespace Example.Clinicas
{
public partial class Clinica : Entidade // Abstract base class that has a property Handle
{
public virtual string Ddd { get; set; }
public virtual string Ddd2 { get; set; }
public virtual long? Duracao { get; set; }
public virtual string Numero { get; set; }
public virtual string Numero2 { get; set; }
public virtual string Prefixo { get; set; }
public virtual string Prefixo2 { get; set; }
public virtual long? HandlePrestador { get; set; }
public virtual Example.Prestadores.Prestador Prestador { get; set; }
}
}
Mapping
namespace Example.Clinicas.Mappings
{
public class ClinicaMapping : ClassMapping<Clinica>
{
public ClinicaMapping()
{
Table("CLI_CLINICA");
Id(x => x.Handle, map =>
{
map.Column("HANDLE");
map.Generator(Generators.Sequence, g => g.Params(new { sequence = "SEQ_AUTO1816" }));
});
Property(x => x.Ddd, map => map.Column( c=>
{
c.Name("DDD1");
c.Length(4);
}));
Property(x => x.Ddd2, map => map.Column( c=>
{
c.Name("DDD2");
c.Length(4);
}));
Property(x => x.Duracao, map => map.Column("INTERVALOAGENDA"));
Property(x => x.Numero, map => map.Column( c=>
{
c.Name("NUMERO1");
c.Length(5);
}));
Property(x => x.Numero2, map => map.Column( c=>
{
c.Name("NUMERO2");
c.Length(5);
}));
Property(x => x.Prefixo, map => map.Column( c=>
{
c.Name("PREFIXO1");
c.Length(5);
}));
Property(x => x.Prefixo2, map => map.Column( c=>
{
c.Name("PREFIXO2");
c.Length(5);
}));
Property(x => x.HandlePrestador, map => map.Column("PRESTADOR"));
ManyToOne(x => x.Prestador, map =>
{
map.Column("PRESTADOR");
map.Insert(false);
map.Update(false);
});
}
}
}
Command
Session.Query<Clinica>().FirstOrDefault();
Generated SQL
select HANDLE489_,
DDD2_489_,
DDD3_489_,
INTERVAL4_489_,
NUMERO5_489_,
NUMERO6_489_,
PREFIXO7_489_,
FATURADE8_489_,
PRESTADOR489_
from (select clinica0_.HANDLE as HANDLE489_,
clinica0_.DDD1 as DDD2_489_,
clinica0_.DDD2 as DDD3_489_,
clinica0_.INTERVALOAGENDA as INTERVAL4_489_,
clinica0_.NUMERO1 as NUMERO5_489_,
clinica0_.NUMERO2 as NUMERO6_489_,
clinica0_.PREFIXO1 as PREFIXO7_489_,
clinica0_.FATURADEPARCELAMENTO as FATURADE8_489_,
clinica0_.PRESTADOR as PRESTADOR489_
from CLI_CLINICA clinica0_)
where rownum <= 1
Exception
ORA-00904: "CLINICA0_"."FATURADEPARCELAMENTO": invalid identifier
Interesting observations:
It is more likely to affect bigger entities (those with more properties), but occasionally affects smaller entities too;
The generated SQL always has the same number of columns as mapped properties;
The columns in the SQL are in the same order as the mapped properties in the mapping class;
The wrong column replaces an existing one;
The wrong column is a valid column in a different mapped entity;
There is no relationship between the affected entity and the one that owns the wrong column;
Other Details:
.NET Version: 4.0
NHibernate Version: 3.3.3.400
Mapping by Code: NHibernate.Mapping.ByCode
Configuration by Code: NHibernate.Cfg
Load Mappings
var mapper = new ModelMapper();
foreach (var assembly in resolver.GetAssemblies()) // resolver is a class that gets all the assemblies for the current application
mapper.AddMappings(assembly.GetExportedTypes());
var mapping = mapper.CompileMappingForAllExplicitlyAddedEntities();
return mapping;
SessionFactory Configuration
var configure = new Configuration();
configure.DataBaseIntegration(x =>
{
x.Dialect<Oracle10gDialect>(); // Custom class
x.ConnectionString = ConnectionString;
x.BatchSize = 100;
x.Driver<OracleMultiQueryDataClientDriver>(); // Custom class
x.MaximumDepthOfOuterJoinFetching = 10;
x.Timeout = 250;
x.PrepareCommands = true;
x.HqlToSqlSubstitutions = "true 'S', false 'N', yes 'S', no 'N'";
x.LogFormattedSql = true;
x.LogSqlInConsole = true;
x.AutoCommentSql = true;
x.IsolationLevel = IsolationLevel.ReadCommitted;
x.ConnectionProvider<ConnectionProvider>(); // Custom class
});
configure.Properties.Add(new KeyValuePair<string, string>("hibernate.command_timeout", "250"));
configure.Proxy(x => x.ProxyFactoryFactory<NHibernate.Bytecode.DefaultProxyFactoryFactory>());
configure.LinqToHqlGeneratorsRegistry<LinqToHqlGeneratorsRegistry>();
configure.CurrentSessionContext<NHibernate.Context.WebSessionContext>();
var mapping = GetMappings(); // Method showed above
mapping.autoimport = false;
configure.AddMapping(mapping);
var listener = new AuditEventListener();
configure.EventListeners.PostInsertEventListeners = new IPostInsertEventListener[] { listener };
configure.EventListeners.PostUpdateEventListeners = new IPostUpdateEventListener[] { listener };
configure.SessionFactory().GenerateStatistics();
return configure;
I asked the same question on the NHibernate Users Google Groups forum, and someone thinks they have worked out the root cause (and have also proposed a solution):
https://groups.google.com/forum/#!topic/nhusers/BZoBoyWQEvs
The problem code is in PropertyPath.Equals(PropertyPath) which attempts to determine equality by only using the hash code. This works fine for smaller code bases as the default Object.GetHashCode() returns a sequential object index. However, after garbage collection, these indices get reused as finalized objects are removed and new objects are created...which results in more than one object getting the same hashcode...Once garbage collection kicks in, property paths have a chance to share the same hashcode which means they will ultimately mix up their customizers for the colliding properties, thus the wrong column names...
If you want to fix this bug and have your own copy of the NH source, you can patch it by changing NHibernate/Mapping/ByCode/PropertyPath.cs line #66 from:
return hashCode == other.GetHashCode();
To:
return hashCode == other.GetHashCode() && ToString() == other.ToString();
Please check out the Google Group for full details of the issue.
It looks like the "credit card payments" column FATURADEPARCELAMENTO is a property on your "lender" object PRESTADOR. If that is the case, it needs to be a reference, and NOT a property, in the mapping. Hope that helps, or at least gets you pointed in the correct direction.
The reference would take the place of your line:
Property(x => x.HandlePrestador, map => map.Column("PRESTADOR"));
and would be something close to
References(x => x.HandlePrestador)
Check your query log to see what type of query it's running; from the SQL there you can spot the problem.