I have a project where a large amount of data is read from different sources. Using some special logic, a data/object model is built from this data, so the result is a complete SQLite-capable object model.
The data was previously written to the SQLite database with a simple:
_connection.InsertWithChildren(model, true);
But since the source data has grown, this is no longer possible, because the insert now throws a "too many variables" exception. ;(
Now I am looking for a replacement for this method. The difficulty is that within my model I nearly always have foreign keys in both directions: the parent has children, and the children know their parent.
Performance is not an issue; I don't care whether the operation takes 10 seconds or 5 minutes. But does anyone have an idea how to handle the insert so that all foreign keys end up filled correctly?
If I use a simple
foreach(var entity in _entityList)
_connection.Insert(entity);
the foreign keys (IDs) all end up as Guid.Empty.
best regards and cheers,
Chris
Until issue #64 is fixed, you can use ReadOnly properties on lists.
For example:
public class Foo
{
[PrimaryKey]
public Guid Id { get; set; }
[OneToMany(ReadOnly = true)]
public List<Bar> Bars { get; set; }
}
public class Bar
{
[PrimaryKey]
public Guid Id { get; set; }
[ForeignKey(typeof(Foo))]
public Guid ParentId { get; set; }
[ManyToOne]
public Foo ParentFoo { get; set; }
}
This way you will no longer hit the variable limit, regardless of the operation executed.
You can now insert the elements safely:
// Insert parent 'foo' element
// This won't insert the children or update their foreign keys
conn.InsertWithChildren(foo);
// Insert all children
// This will also update ParentId foreign key if ParentFoo property is set
conn.InsertAllWithChildren(bars);
Or use plain SQLite.Net methods assigning the foreign keys yourself:
conn.Insert(foo);
foreach (var bar in bars) {
bar.ParentId = foo.Id;
conn.Insert(bar);
}
I'm still pretty new to these technologies. I've run into a small issue, one that could be fixed by writing some lazy code... but OrmLite and ServiceStack streamline so many things that I'm wondering if there's a better way to do this.
So, I have a data model:
public class cctv_camera
{
[AutoIncrement]
public int I_id { get; set; }
public string I_sid { get; set; }
public string C_store_id { get; set; }
// .... others
}
This data model is mapped to a table, cctv_camera. There's another model (call it CamDetail) being sent to the client after some joins from this table. We are receiving back a CamDetail object from the client on a POST to save to the database and populating an instance of lp_cctv_camera with the data (new lp_cctv_camera().PopulateWith(CamDetail);).
Here's the thing: the I_sid column is a NOT NULL column with a default constraint that generates a hash for that row. It's something that the database is responsible for, so new items should not INSERT this column; it should be generated by the constraint.
Is there any way to db.Insert(lp_cctv_camera) while ignoring this column? I have tried the [Ignore] attribute on the definition, but we still need it in the definition to send existing I_sids out to the client. I really can't find anything in the docs. Any help is appreciated!
We've added an explicit [IgnoreOnInsert] attribute you can use to ignore specific properties on insert, available from v4.5.13 on MyGet.
Prior to v4.5.13 you can use the [Compute] attribute to get similar behavior and ignore fields during inserts, e.g.:
public class cctv_camera
{
[AutoIncrement]
public int I_id { get; set; }
[Compute]
public string I_sid { get; set; }
public string C_store_id { get; set; }
// .... others
}
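For illustration, here's a rough usage sketch (assuming an open OrmLite IDbConnection named db; the "STORE-01" value is made up):
// I_sid carries [Compute] (or [IgnoreOnInsert] from v4.5.13), so it is left out
// of the generated INSERT and the database default constraint fills it in.
var camera = new cctv_camera { C_store_id = "STORE-01" };
db.Insert(camera);

// Selecting the row back returns the database-generated I_sid as usual.
var saved = db.Single<cctv_camera>(x => x.C_store_id == "STORE-01");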
I'm looking for some help with an error I'm getting when using .AddRange in EF 6. The error is:
The changes to the database were committed successfully, but an error occurred while updating the object context.
The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because
the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are
unique before calling AcceptChanges.
As the error states, my records are actually getting added to the table, so I don't know where to start fixing the error.
Doing some research, I found a bunch of posts saying this has to do with the .edmx file and a primary key on the table; the suggestion is basically to add the PK and then rebuild the .edmx file. That doesn't fit my scenario for two reasons: first, I'm using EF 6 with Database First, so there isn't an .edmx file; second, this is mapped to an Oracle 11 DB, so the identity is created with a trigger (which seems to work when I look at the added records).
Here is the code I'm using, as well as the entity class.
using (APIcontext db = new APIcontext())
{
if (listLostTime.Count > 0)
{
db.GROUND_HOURS.AddRange(listLostTime);
db.SaveChanges();
}
}
And the entity class
[Table("GROUND_HOURS")]
public partial class GROUND_HOURS
{
[Key]
public decimal RID { get; set; }
[Required]
[StringLength(8)]
public string EMP_ID { get; set; }
[StringLength(2)]
public string COMPANY_CODE { get; set; }
public DateTime OCCURRENCE_DATE { get; set; }
[Required]
[StringLength(25)]
public string PAY_CODE { get; set; }
public decimal PAY_HOURS { get; set; }
public DateTime INSERT_DATE { get; set; }
}
I'm looking for any suggestions.
Decorate the RID property with the [DatabaseGenerated(DatabaseGeneratedOption.Identity)] attribute.
The problem is that Entity Framework isn't updating the key value RID with the store-generated value prior to accepting changes. In your case, with multiple GROUND_HOURS entities created, each will (presumably) have the default RID value of 0. When EF attempts to accept changes, it recognizes that more than one entity has the same key value and complains.
Thanks to @Moho, who gave the ultimate fix. This is how I changed the primary key in my entity class, and it is what I used in my application.
[Key]
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int RID { get; set; }
Just to let others know, I was also able to fix it another way. First off, because this is an Oracle DB, the RID (which is my primary key) was scaffolded as a decimal. That caused the RID to always be 0 when I added my object to a list without specifically assigning it a value. To work around that, I changed the RID property to a nullable int, and then when I created my list I set RID = null.
[Key]
public int? RID { get; set; }
This is what I did when creating my list:
foreach (var item in results)
{
GROUND_HOURS lostTime = new GROUND_HOURS();
lostTime.RID = null;
lostTime.EMP_ID = item.EmployeeId.ToString("D8");
lostTime.COMPANY_CODE = item.CompanyCode.Trim();
lostTime.OCCURRENCE_DATE = item.OccurrenceDate;
lostTime.PAY_CODE = item.PayCode.Trim();
lostTime.PAY_HOURS = item.Hours;
listLostTime.Add(lostTime);
}
I am trying to map vertical inheritance between a base class and a derived class (obviously). I am using the code-only/Fluent API approach, for which I have found almost ZERO documentation. I have found a couple of docs on vertical inheritance with code-only mapping, but very few on managing the discriminator column/value.
So I have been trying to extrapolate how to do it from a combination of this blog post and some documentation on implementing vertical inheritance using code-only. All to no avail.
You will see that I have a "Deliverables" base table and "PrintDeliverables" derives from that. There will be other derivatives coming down the road. But I figured I would start with one first.
Anyway, I naturally have models that map to the tables.
public class PrintDeliverable : BDeliverableBase
{
public String PaperItemNumber { get; set; }
public String PrinterModel { get; set; }
public Boolean? ColorOption { get; set; }
public String ProductCode { get; set; }
}
public class BDeliverableBase : BModelBase, IDeliverable, ISingleID
{
public Int64 ID { get; set; }
public String Label { get; set; }
public String Description { get; set; }
public IList<DeliverableAttribute> Attributes { get; set; }
public Int64 TypeID { get; set; }
public DeliverableType Type { get; set; }
}
public class DeliverableType : BModelBase, ISingleID
{
public Int64 ID { get; set; }
public String Label { get; set; }
public String Description { get; set; }
public IList<BDeliverableBase> Deliverables { get; set; }
}
I have standard mappings that map the fields, types, sizes, etc. When I run it with no further additions, I get the error ...
Invalid Column name voa_class
My research uncovered that the ORM is attempting to update a "discriminator" column with a value that ties the base table and the table with the derived data together. I learned that I can change the name of the column it uses, which I did in the BASE CLASS mapping (BDeliverableBase). I changed it to use the "DeliverableTypeId" column, since the DeliverableType indicates which TYPE of deliverable it is. Since each TYPE will have its own derivative table, this seemed like an appropriate value for deciding which derivative table to use.
MappingConfiguration<BDeliverableBase> map = new MappingConfiguration<BDeliverableBase>();
map.HasDiscriminator().ToColumn("DeliverableTypeId");
It appears to like this better, but it wants to insert a crazy number (e.g. 612274703-854) into the DeliverableTypeId column, which, being a foreign key to the DeliverableTypes table, is of course not allowed.
Insert of '612274703-' failed: Telerik.OpenAccess.RT.sql.SQLException: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_DeliverableType". The conflict occurred in database "DB1", table "dbo.DeliverableTypes", column 'DeliverableTypeId'
I learned that OpenAccess/DataAccess generates a hash value to insert into the discriminator column. I do not want this; in fact, I know that the value must be one of the IDs available in the DeliverableTypes table. Then I read in one of the docs that you can define what value to assign to the discriminator. The example applied a hard-coded value to the base class (dog and cat derived from animal) ...
animal.HasDiscriminatorValue("23");
This presented one problem ... I do not have a single value I can hard-code. It could be one of MANY values present in the DeliverableTypes table. However, for the sake of proving out the concept, I hard-coded the value of an existing record:
MappingConfiguration<BDeliverableBase> map = new MappingConfiguration<BDeliverableBase>();
map.HasDiscriminator().ToColumn("DeliverableTypeId");
map.HasDiscriminatorValue("819");
I continued to get the identical error as before, so it doesn't appear that my hard-coded value was being applied. So ... I thought that, while hard-coding the value is a little hacky, it would make more sense to define it in the mapping for the derived class. That would resolve my hard-coding concern, since ALL instances of that derived class WOULD indeed have the same DeliverableTypeId. So I tried ...
MappingConfiguration<BDeliverableBase> map = new MappingConfiguration<BDeliverableBase>();
map.HasDiscriminator().ToColumn("DeliverableTypeId");
MappingConfiguration<PrintDeliverable> printMap = new MappingConfiguration<PrintDeliverable>();
printMap.HasDiscriminatorValue("819");
This resulted in the Error
Insert of '612274703-857' failed: Telerik.OpenAccess.RT.sql.SQLException: String or binary data would be truncated.
So I got a different error but still the same problem. This time it was trying to stuff the ORM-generated discriminator value (instead of my 819) into what I assume is my defined discriminator column (DeliverableTypeId), although the different error makes me suspect it was targeting a different column.(?)
In an effort not to drag this out too long: I have tried several combinations of where these "HasDiscriminator" and "HasDiscriminatorValue" assignments go, but I always end up with one or the other of these errors. So the question is ...
How, using code-only, do I map Vertical Inheritance using multiple, existing "type" values?
Most posts about the ObjectStateManager error describe genuine duplicate issues based on unique primary keys. My problem is that my table does NOT have a primary key, but it does have multiple foreign keys, one of which is nullable.
class MyObject
{
    public int Key1 { get; set; }
    public int? Key2 { get; set; }
}

context.MyTable.Attach(new MyObject() { Key1 = 100, Key2 = null });
context.MyTable.Attach(new MyObject() { Key1 = 100, Key2 = 2000 }); // ****
It blows up on the second call, even though this is a unique row in the database.
Any thoughts on how to get around this, or how to enforce checking of BOTH keys?
As @BenAaronson mentioned, you should have a surrogate primary key in your table in this instance. Entity Framework quite simply cannot deal with entities that have no primary key defined; in fact, I'm surprised your code even compiled/ran. Perhaps your real code, with its real class and property names, caused EF to infer a primary key using its default conventions. For example:
public class MyClass
{
public int MyClassId { get; set; }
public int MyOtherClassId { get; set; }
}
In the code above, even without explicitly declaring it, EF would assume that the MyClassId property is the primary key for the class MyClass, even if that may not have been your intention.
If EF can't infer a primary key and one is not explicitly provided, the model will fail validation at runtime (the code itself still compiles).
So looking at your code, what appears to be happening is that EF inferred a primary key somehow (in your example above, Key1). You then tried to attach a new object to your context:
context.MyTable.Attach(new MyObject() { Key1 = 100, Key2 = null });
This results in the context adding a new MyObject instance whose primary key value is 100 and whose Key2 property is null.
Next, you attempt to attach another item to the context:
context.MyTable.Attach(new MyObject() { Key1 = 100, Key2 = 2000 });
What this does is attempt to attach another item to the context whose primary key is 100, and this fails, because the context is already tracking an object whose primary key value is 100 (attached by the first statement above).
Since you need to allow possibly null values for the Key2 property, you can't use a composite primary key, as you already stated. So you will need to follow @BenAaronson's advice and add a surrogate primary key:
public class MyObject
{
    // Alternatively, you can use a mapping class to define the primary key.
    // I just wanted to make the example clear that this is the
    // surrogate primary key property.
    [Key]
    public int ObjectID { get; set; } // with extra mapping you may be able to keep this non-public
    public int Key1 { get; set; }
    public int? Key2 { get; set; }
}
Now, you can do the following:
context.MyTable.Add(new MyObject() { Key1 = 100, Key2 = null });
context.MyTable.Add(new MyObject() { Key1 = 100, Key2 = 2000 });
Notice I used the Add method and not Attach. When you use Attach, the context assumes you're handing it an object that already exists in the database but was not brought into the context via a query: you have a representation of it in memory, and from this point on you want the context to track changes made to it and update the corresponding row when you call context.SaveChanges(). Attach therefore registers the object in the Unchanged state.
That's not what we want here. We have brand new objects being added to the context, so we use Add, which tells the context to register the item in the Added state. You can make any changes you want to it; since it's a new item, it remains in the Added state until you call context.SaveChanges() and the item is persisted to your data store, at which point its state is updated to Unchanged.
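To make the distinction concrete, here is a minimal sketch using the model above (the MyContext name, the ObjectID value of 42, and the updated Key2 value are made up for illustration, and the surrogate key is assumed to be publicly settable):
using (var context = new MyContext())
{
    // Brand new row: Add puts the entity in the Added state,
    // so SaveChanges issues an INSERT.
    context.MyTable.Add(new MyObject() { Key1 = 100, Key2 = null });

    // Row that already exists in the database: Attach tracks it as Unchanged,
    // so nothing is written until a property actually changes.
    var existing = new MyObject() { ObjectID = 42, Key1 = 100, Key2 = 2000 };
    context.MyTable.Attach(existing);
    existing.Key2 = 2500; // now detected as Modified; SaveChanges issues an UPDATE

    context.SaveChanges();
}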
One more thing to note at this point. If this is a "many-to-many" table, you should never need to manually add rows to this type of join table in EF (there are some caveats to this statement, see below). Instead, you should setup a mapping between the two objects whose relationship is many-to-many. It's possible to specify an optional many-to-many relationship, too. If the first object has no relationship to the second, there should be no row in the join table for the first object, and vice versa.
Regarding join table caveats as alluded to above: if your join-tables (i.e. many-to-many mapping tables) are simple (meaning the only columns in the table are those columns mapping one ID to the related ID), then you won't even see the join-table as part of your object model. This table is managed by EF in the background through navigation properties on the related objects. However, if the join-table contains properties other than just the ID properties of the related objects (and, this implies you have an existing database or explicitly structured your object model this way), then you will have an intermediate entity reference. For example:
public class A
{
public int ID { get; set; }
}
public class B
{
public int ID { get; set; }
}
public class AToB
{
    // Composite primary key (data annotations need an explicit column order
    // when the key spans more than one property)
    [Key, Column(Order = 0)]
    public int IdA { get; set; }
    [Key, Column(Order = 1)]
    public int IdB { get; set; }
    public A SideA { get; set; }
    public B SideB { get; set; }
    // An additional property in the many-to-many join table
    public DateTime Created { get; set; }
}
You would also have some mappings to tell EF how to wire up the foreign key relationships. What you'd wind up with in your object model then, is the following:
myA.AToB.SideB // Accesses the related B item to this A item.
myA.AToB.Created // Accesses the created property of AToB, telling you
// when the relationship between A and B was created.
In fact, if you have non-trivial join tables such as this example, EF will always include them in your object model when generating its model from an existing database.
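For completeness, a rough sketch of the fluent mappings alluded to above might look like this (EF6 syntax, placed in your DbContext; the property names are the ones from the AToB example, and the exact cardinalities are assumptions):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Composite key for the join entity (equivalent to the [Key, Column(Order = n)] annotations)
    modelBuilder.Entity<AToB>().HasKey(x => new { x.IdA, x.IdB });

    // Wire each navigation property to its foreign key column
    modelBuilder.Entity<AToB>()
        .HasRequired(x => x.SideA)
        .WithMany()
        .HasForeignKey(x => x.IdA);

    modelBuilder.Entity<AToB>()
        .HasRequired(x => x.SideB)
        .WithMany()
        .HasForeignKey(x => x.IdB);
}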
I would strongly suggest that you check out Julie Lerman's and Rowan Miller's books on programming Entity Framework.
My first time on the site, so apologies if this is tagged incorrectly or has been answered elsewhere...
I keep running into a particular situation on my current project and I was wondering how you would deal with it. The pattern is: a parent with a collection of children, where the parent also has one or more references to particular items in that child collection, normally the 'default' child.
A more concrete example:
public class SystemMenu
{
public IList<MenuItem> Items { get; private set; }
public MenuItem DefaultItem { get; set; }
}
public class MenuItem
{
public SystemMenu Parent { get; set; }
public string Name { get; set; }
}
To me this seems like a good, clean way of modelling the relationship, but it causes problems immediately thanks to the circular association: I can't enforce the relationship in the DB because of the circular foreign keys, and LINQ to SQL blows up on the cyclic association. Even if I could bodge my way round this, it's clearly not a great idea.
My only idea currently is to have an 'IsDefault' flag on MenuItem:
public class SystemMenu
{
public IList<MenuItem> Items { get; private set; }
public MenuItem DefaultItem
{
get
{
return Items.Single(x => x.IsDefault);
}
set
{
DefaultItem.IsDefault = false;
value.IsDefault = true;
}
}
}
public class MenuItem
{
public SystemMenu Parent { get; set; }
public string Name { get; set; }
public bool IsDefault { get; set; }
}
Has anyone dealt with something similar and could offer some advice?
Cheers!
Edit: Thanks for the responses so far. Perhaps the 'Menu' example wasn't brilliant, though; I was trying to think of something representative so I didn't have to go into the specifics of our not-so-self-explanatory domain model! Perhaps a better example would be a Company/Employee relationship:
public class Company
{
public string Name { get; set; }
public IList<Employee> Employees { get; private set; }
public Employee ContactPerson { get; set; }
}
public class Employee
{
public Company EmployedBy { get; set; }
public string FullName { get; set; }
}
The Employee would definitely need a reference to their Company, and each Company could only have one ContactPerson. Hope this makes my original point a bit clearer!
The trick to solving this is to realize that the parent does not need to know about all of the methods of the child, and that the child does not need to know all the methods of the parent. Therefore you can use the Interface Segregation Principle to decouple them.
In short, you create an interface for the parent that has only those methods that the child needs. You also create an interface for the child that has only those methods that the parent needs. Then you have the parent contain a list of the child interfaces, and you have the child point back to the parent interface. I call this the Flip Flop Pattern because the UML diagram has the geometry of an Eccles-Jordan flip-flop (sue me, I'm an old hardware engineer!)
|ISystemMenu|<-+      +->|IMenuItem|
      A      1  \    /  *     A
      |          \  /         |
      |           \/          |
      |           /\          |
      |          /  \         |
      |         /    \        |
|SystemMenu|---+      +---|MenuItem|
Notice that there is no cycle in this diagram: you cannot start at one class and follow the arrows back to your starting point.
Sometimes, in order to get the separation just right, you have to move some methods around. There might be code that you thought should have been in the SystemMenu that you move to the MenuItem, etc. But in general the technique works well.
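A minimal code sketch of that split might look like the following (the member names are illustrative; in practice each interface carries whatever the other side actually needs):
public interface IMenuItem
{
    string Name { get; }
}

public interface ISystemMenu
{
    IMenuItem DefaultItem { get; set; }
}

public class SystemMenu : ISystemMenu
{
    public IList<IMenuItem> Items { get; private set; }
    public IMenuItem DefaultItem { get; set; }

    public SystemMenu()
    {
        Items = new List<IMenuItem>();
    }
}

public class MenuItem : IMenuItem
{
    public ISystemMenu Parent { get; set; } // the child only sees the parent's interface
    public string Name { get; set; }
}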
Your solution seems quite reasonable.
Another thing to think about is that your objects in memory don't have to exactly match the database schema. In the database you can have the simpler schema with the child properties, but in memory you can optimize things and have the parent with references to the child objects.
I don't really see your problem. Clearly you're using C#, which holds objects as references, not as inline instances. This means it's perfectly fine to have cross-references, or even self-references.
In C++ and other languages where objects are more often composed by value you can run into problems, which are typically solved using references or pointers, but C# should be fine.
More than likely your problem is that you're trying to follow all references somehow, leading to a circular traversal. LINQ to SQL uses lazy loading to address this issue: it won't load the Company or the Employee until you reference it. You just need to avoid following such references more than one level deep.
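For example, with LINQ to SQL's default deferred loading (a rough sketch; the MyDataContext name and the company lookup are assumed for illustration):
using (var db = new MyDataContext())
{
    var company = db.Companies.First(c => c.Name == "ACME Widgets");

    // The Employees collection has not been loaded yet; it is only fetched
    // from the database when it is enumerated, so the object graph is only
    // followed as far as you actually walk it.
    foreach (var employee in company.Employees)
        Console.WriteLine(employee.FullName);
}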
However, you can't really make two tables each other's foreign key, otherwise you would never be able to delete any record: deleting an employee would require deleting the company first, but you couldn't delete the company without deleting the employee. Typically, in this case, you would only use one as a real foreign key; the other would simply be a pseudo-FK (that is, one that is used as an FK but doesn't have a constraint enabled). You have to decide which is the more important relationship.
In the company example, you would likely want to delete the employee but not the company, so make the company->employee FK the constraint relationship. This prevents you from deleting the company if there are employees, but you can delete employees without deleting the company.
Also, avoid creating new objects in the constructor in these situations. For instance, if your Employee object creates a new Company object, which in turn creates a new Employee object for the employee, the recursion will eventually exhaust memory (or the stack). Instead, pass the already-created objects to the constructor, or set them after construction, possibly by using an initialization method.
For instance:
Company c = GetCompany("ACME Widgets");
c.AddEmployee(new Employee("Bill"));
Then, in AddEmployee, you set the company:
public void AddEmployee(Employee e)
{
Employees.Add(e);
e.Company = this;
}
Maybe a self-referential GoF Composite pattern is in order here. A Menu has a collection of leaf MenuItems, and both share a common interface. That way you can compose a Menu out of Menus and/or MenuItems. The schema has a table with a foreign key that points back to its own primary key. Works with walking menus that way, too.
In code, you need references both ways in order to navigate both ways. But in the database, you only need the reference one way to make things work: because of the way joins work, you only need the foreign key in one of your tables. When you think about it, every foreign key in your database could be flipped around and create a circular reference. Best to just pick one direction, in this case probably the child holding a foreign key to the parent, and be done with it.
In a domain-driven design sense, you can choose to avoid bidirectional relations between entities where possible. Choose one "aggregate root" to hold the relations, and navigate to the other entity only from the aggregate root. I try to avoid bidirectional relations where possible, because of YAGNI, and because they make you ask "which came first, the chicken or the egg?" Sometimes you will still need bidirectional associations; then choose one of the solutions mentioned earlier.
/// This is the aggregate root
public class Company
{
public string Name { get; set; }
public IList<Employee> Employees { get; private set; }
public Employee ContactPerson { get; set; }
}
/// This isn't
public class Employee
{
public string FullName { get; set; }
}
You can enforce foreign keys in the database where two tables refer to each other. Two ways come to mind:
The default child column in the parent is initially null and is only updated once all the child rows have been inserted.
You defer constraint checking until commit time. This means you can insert first the parent with an initially broken reference to the child, then insert the child. One problem with deferred constraint checking is that you can end up with database exceptions being thrown at commit time which is often inconvenient in many db frameworks. Also, it means you need to know the primary key of the child before you insert it which may be awkward in your setup.
I've assumed here that the parent menu item lives in one table and the child in a different table but the same solution would work if they are both in the same table.
Many DBMSs support deferred constraint checking; possibly yours does too, although you don't mention which DBMS you are using.
Thanks to all who answered, some really interesting approaches! In the end I had to get something done in a big hurry so this is what I came up with:
Introduced a third entity called WellKnownEmployee and a corresponding WellKnownEmployeeType enum:
public class Company
{
public string Name { get; set; }
public IList<Employee> Employees { get; private set; }
private IList<WellKnownEmployee> WellKnownEmployees { get; set; }
public Employee ContactPerson
{
get
{
var wellKnown = WellKnownEmployees.SingleOrDefault(x => x.Type == WellKnownEmployeeType.ContactPerson);
return wellKnown == null ? null : wellKnown.Employee;
}
set
{
if (ContactPerson != null)
{
// Remove the existing WellKnownEmployee of type ContactPerson
}
// Add a new WellKnownEmployee of type ContactPerson
}
}
}
public class Employee
{
public Company EmployedBy { get; set; }
public string FullName { get; set; }
}
public class WellKnownEmployee
{
public Company Company { get; set; }
public Employee Employee { get; set; }
public WellKnownEmployeeType Type { get; set; }
}
public enum WellKnownEmployeeType
{
Uninitialised,
ContactPerson
}
It feels a little cumbersome but gets around the circular reference issue, and maps cleanly onto the DB which saves trying to get LINQ to SQL to do anything too clever! Also allows for multiple types of 'well known contacts' which is definitely coming in the next sprint (so not really YAGNI!).
Interestingly, once I came up with the contrived Company/Employee example it made it MUCH easier to think about, in contrast to the fairly abstract entities that we're really dealing with.