The situation is that I have a table that models an entity. This entity has a number of properties (each identified by a column in the table). The thing is that in the future I'd need to add new properties or remove some properties. The problem is how to model both the database and the corresponding code (using C#) so that when such an occasion appears it would be very easy to just "have" a new property.
In the beginning there was only one property, so I had one column. I defined the corresponding property in the class, with the appropriate type and name, then created stored procedures to read it and update it. Then came the second property: a quick copy-paste, a changed name and type, a bit of SQL, and there it was. Obviously this is not a suitable model going forward. By this point some of you might suggest an ORM (EF or another) because it would generate the SQL and code automatically, but for now that is not an option for me.
I thought of having only one procedure for reading a single property (by property name) and another one for updating it (by name and value), plus some general procedures for reading a bunch of properties, or all of them, for an entity in the same statement. This may sound easy in C# if you consider using generics, but the database doesn't know about generics, so it's not possible to have a strongly typed solution.
I would like a solution that's "as strongly typed as possible" so I don't need to do a lot of casting and parsing. I would define the available properties in code so you don't have to guess what's available or rely on magic strings and the like. Then the process of adding a new property to the system would only mean adding a new column to the table and adding a new property "definition" in code (e.g. in an enum).
It sounds like you want to do this:
MyObj x = new MyObj();
x.SomeProperty = 10;
You have a table created for that, but you don't want to keep altering that table when you add
x.AnotherProperty = "Some String";
You need to normalize the table data like so:
-> BaseTable
RecordId, Col1, Col2, Col3
-> BaseTableProperty
PropertyId, Name
-> BaseTableValue
ValueId, RecordId, PropertyId, Value
Your class would look like so:
public class MyObj
{
public int Id { get; set; }
public int SomeProperty { get; set; }
public string AnotherProperty { get; set; }
}
When you create your object from your data layer, you enumerate the record set. You then write code once that inspects the property with the same name as your configuration (BaseTableProperty.Name == MyObj.<PropertyName>) and attempts the type conversion as you enumerate the record set.
Then, you simply add another property to your object, another record to the database in BaseTableProperty, and then you can store values for that guy in BaseTableValue.
Example:
BaseTable:

RecordId
========
1

BaseTableProperty:

PropertyId  Name
==========  ============
1           SomeProperty

BaseTableValue:

ValueId  RecordId  PropertyId  Value
=======  ========  ==========  =====
1        1         1           100
You have two result sets: one for the basic data, and one joined from the Property and Value tables. As you enumerate each record, you see a Name of SomeProperty - does typeof(MyObj).GetProperty("SomeProperty") exist? Yes? What is its data type? int? OK, then try to convert "100" to int by setting the property:
propertyInfo.SetValue(myNewObjInstance, Convert.ChangeType(dbValue, propertyInfo.PropertyType), null);
For each property.
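A minimal sketch of that loop, assuming an IDataReader over the joined BaseTableProperty/BaseTableValue result set (the class, method, and column names here are illustrative, not from the original code):

using System;
using System.Data;

public static class EntityHydrator
{
    // Builds a T from rows shaped like (Name, Value), e.g. ("SomeProperty", "100").
    public static T Hydrate<T>(IDataReader reader) where T : new()
    {
        var instance = new T();

        while (reader.Read())
        {
            var name = (string)reader["Name"];
            var dbValue = reader["Value"];

            // Does the object expose a property with the same name as the configuration row?
            var propertyInfo = typeof(T).GetProperty(name);
            if (propertyInfo == null || dbValue == DBNull.Value)
                continue;

            // Convert the stored value to the property's type and set it.
            propertyInfo.SetValue(
                instance,
                Convert.ChangeType(dbValue, propertyInfo.PropertyType),
                null);
        }

        return instance;
    }
}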
Even if you said you cannot use them, that is what most ORMs do. Depending on which one you use (or even create, if it's a learning experience), they vary greatly in complexity and performance. If you prefer a lightweight ORM, check out Dapper.Net. It makes use of generics as well, so you can check the code, see how it works, and create your own solution if needed.
I'm trying to get all the HotFix entities and include all the details (associated with them) where the property Available is 1. This is my code:
public static IList<HotFix> GetAllHotFix()
{
using (Context context = new Context())
{
return context.HotFix
.Include(h => h.AssociatedPRs)
.Include(h => h.Detail.Where(d => d.Available = 1))
.ToList();
}
}
And I'm getting that error. I tried using .ThenInclude but couldn't solve it.
Inside HotFix I have:
[Required]
public virtual List<HotFixDetail> Detail { get; set; }
Although you forgot to write your class definitions, it seems that you have a HotFix class. Every HotFix has a sequence of zero or more AssociatedPRs and a sequence of zero or more Details.
Every Detail has at least a numeric property Available.
You want all HotFixes, each with all its AssociatedPRs, and all Details that have an Available value equal to 1 (didn't you mean for Available to be a Boolean?).
When using Entity Framework, people tend to use Include to get an item with its sub-items. This is not always the most efficient method, as it fetches the complete row of a table, including all the properties that you do not plan to use.
For instance, if you have a one-to-many relationship, Schools with their Students, then each Student has a foreign key to the School that this Student attends.
So if School [10] has 1000 Students, then every Student has a foreign key value of 10. If you use Include to fetch School [10] with its Students, this foreign key value is also selected and sent 1000 times. You already know it will equal the School's primary key value, hence it is a waste of processing power to transport this value 1001 times.
When querying data, always use Select, and Select only the properties you actually plan to use. Only use Include if you plan to update the fetched data.
Another piece of good advice is to use plurals to describe sequences and singulars to describe one item in a sequence.
Your query will be:
var result = context.HotFixes.Select(hotfix => new
{
// Select only the hotfix properties you actually plan to use:
Id = hotfix.Id,
Date = hotfix.Date,
...
AssociatedPRs = hotfix.AssociatedPRs.Select(associatedPr => new
{
// again, select only the associatedPr properties that you plan to use
Id = associatedPr.Id,
Name = associatedPr.Name,
...
// foreign key not needed, you already know the value
// HotFixId = associatedPr.HotFixId
})
.ToList(),
Details = hotfix.Details
.Where(detail => detail.Available == 1)
.Select(detail => new
{
Id = detail.Id,
Description = detail.Description,
...
// not needed, you know the value:
// Available = detail.Available,
// not needed, you know the value:
// HotFixId = detail.HotFixId,
})
.ToList(),
});
I used an anonymous type. You can only use it within the method in which the anonymous type is defined. If you need to return the fetched data, you'll need to put the selected data in a class.
return context.HotFixes.Select(hotfix => new HotFix()
{
Id = hotfix.Id,
Date = hotfix.Date,
...
AssociatedPRs = hotfix.AssociatedPRs.Select(associatedPr => new AssociatedPr()
{
... // etc
Note: you still don't have to fill all the fields, unless your functional requirements specifically state this.
It might be confusing for users of your function to not know which fields will actually be filled and which ones will not. On the other hand: when adding items to your database they are already accustomed not to fill in all fields, for instance the primary and foreign keys.
As a solution to the fact that not all fields are filled, some developers design an extra layer: the repository layer (using the repository pattern). For this they create classes that represent the data that callers want to put into, or fetch from, storage. Usually those callers are not interested in the fact that the data is saved in a relational database with foreign keys and such, so the repository classes won't have the foreign keys.
The advantage of the repository pattern is that the repository layer hides the actual structure of your storage system. It even hides that it is a relational database; it might also be a JSON file. If the database changes, users of the repository layer don't have to know about this, and probably don't need to change either.
A repository pattern also makes it easier to mock the database for unit testing: since users don't know that the data is in a relational database, for the unit test you can save the data in a JSON file, a CSV file, or whatever.
The disadvantage is that you need to write extra classes that hold the data that is to be put into, or fetched from, the repository.
Whether it is wise to add this extra layer or not depends on how often you expect your database layout to change in the future, and how good your unit tests need to be.
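As a rough illustration only (the interface, DTO, and property names below are assumptions, not something from your code), such a repository contract could look like this, with the EF-backed implementation essentially wrapping the Select query shown earlier:

using System;
using System.Collections.Generic;

// Illustrative repository contract: callers work with plain objects and never
// learn whether the data comes from EF, a JSON file, or a test double.
public interface IHotFixRepository
{
    IReadOnlyList<HotFixDto> GetAvailableHotFixes();
}

// Only the fields callers actually need; no foreign keys.
public class HotFixDto
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public IReadOnlyList<string> AssociatedPrNames { get; set; }
    public IReadOnlyList<string> DetailDescriptions { get; set; }
}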
I know that my question is a bit confusing, but let me explain in detail.
Suppose that I have a Person class like this:
public class Person {
public int Id {get; set;}
public string Name {get; set;}
}
and I want to create a new entity, but these two classes are similar, so I would like to just inherit and add some new properties:
public class Employee : Person {
public string Position {get; set;}
}
Everything works fine, but I have a problem when I want to select the data from the Person table and add it to the Employee class like this:
employee = _context.Person.Select(
a => new Employee {
Name = a.Name,
Position = "Programmer"
}).ToList();
So as you can see here, I want to add the Position property, but I also want the previous data from the Person table. The problem is that I have to type out the previous data from the Person table manually. If the Person table has a lot of properties, I need to type all of them to get all the data. Is there any way to get the previous data without typing all of it? In JavaScript there is something like:
const newState = {
  ...state,
  Position: "employee"
};
Is it possible to do something like this in C#?
Having Employee as an entity, you can use
var employees = _context.Employee.Include(e=>e.Person).ToList();
Then you can access the data like this: employees[0].Person.Name, and so on.
If I understand you, you essentially want to "upgrade" an existing Person entity to an Employee entity. Unfortunately, this is not as simple or straight-forward as you would like. EF Core models inheritance via a single table with a discriminator column. That discriminator column informs what class type should actually be instantiated, when the entity is pulled from the database. Since it was saved as a Person, it will have "Person" as the value there, and EF Core will instantiate a Person instance when you retrieve it from the database.
You can then potentially downcast it to Employee, but EF Core isn't going to know what to do with this. Technically, the downcast instance will be untracked. If you attempt to track it, you'll get an error on saving as EF Core will attempt to add a new Employee with the same PK as an existing Person. Boom.
Your best bet is to map over the data from the Person instance to a new Employee instance, including all associated relationships. Then, create the Employee, causing the relationships to be moved at the database-level, before finally removing the old Person.
This will of course result in a new PK being assigned (which is why it's important to migrate the related entities), and that could potentially be problematic if you've got URLs referencing that PK or if you're simply dealing with an auto-increment PK. You'll end up leaving gaps in the keys, and could potentially even run out of keys, if you do this enough.
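A rough sketch of that approach, assuming EF Core with TPH (DbSets are accessed via Set<T>() here; the property names besides Position are illustrative, and any navigation collections on the old Person would need to be re-pointed the same way):

// Sketch: copy the Person's data into a new Employee, then remove the old Person.
// The new Employee gets a new PK, as described above.
public Employee PromoteToEmployee(MyDbContext context, int personId, string position)
{
    var person = context.Set<Person>().Single(p => p.Id == personId);

    var employee = new Employee
    {
        Name = person.Name,
        // ...copy any remaining Person properties here...
        Position = position
    };

    // Re-point related entities from the old Person to the new Employee here,
    // so their foreign keys move at the database level.

    context.Set<Employee>().Add(employee);
    context.Set<Person>().Remove(person);
    context.SaveChanges();

    return employee;
}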
The only other potential alternative is to change the value of the discriminator column. This would have to be done via straight SQL, as the discriminator column is a shadow property not exposed by EF Core (i.e. there's no way to modify it via C# code). However, if you literally change the value to something like "Employee", then when you fetch it, EF will create an Employee instance, just with all the Employee-specific properties null or default. At that point, you can make the necessary updates and save it back.
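For completeness, a hedged example of that raw-SQL route (assuming EF Core 3+, the default shadow column name "Discriminator", and a table named "People"; adjust to your actual schema):

// Flips the TPH discriminator so the next fetch materializes an Employee.
context.Database.ExecuteSqlInterpolated(
    $"UPDATE People SET Discriminator = 'Employee' WHERE Id = {personId}");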
I need to create an application that is compatible with various legacy database systems.
So, the database exists, but I still want to use code first to be independent from whatever database is used as datastore. For each deployment, I intend to create a "mapping" library containing the correct FluentAPI mappings of the entities to the database.
I'm using EF6.
I don't want code first to alter anything in the database structure automagically, so I use
Database.SetInitializer<mycontext>(null);
Now I'm stuck on the following issue:
My code defines an enum Gender, which is used as a property in the Person entity:
public enum Gender
{
M = 1,
F = 2
}
However, in one of the legacy databases, the values are the other way around. The table "Gender" exists, and the lookup data in that table is ID 1 = female, ID 2 = male.
The Person table has a "FK Gender ID" column.
How would I configure, through the Fluent API, the mapping of the Gender property of my Person entity to the Person table in the legacy database?
modelbuilder.Entity<Person>()
.Property(c => c.Gender)
.HasColumnName("FK Gender ID") //--> and how to "inverse" these values here ?
Is this possible with the Fluent API, and if not, is there a workaround?
Thanks.
I don't think what you are trying to do is possible. For simplicity, you should consider changing your code to match what you have in your database. If you cannot do that, here is what you can do.
Define an enum type (something like GenderDb). Ideally no one should even see this enum. Then create private properties of the GenderDb type and map them to the database columns (I believe EF can map columns to private properties). Again, the properties are private so that no one can see them. Then add public properties of the Gender type on the entities that have the private GenderDb properties; the public properties should be configured as not mapped/ignored. Finally, implement the setter and getter of the public properties so that they convert the value accordingly (i.e. the setter converts the Gender enum to GenderDb and sets the private property; the getter reads the private property and converts GenderDb to Gender).
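A minimal sketch of that shape (all names except Gender are illustrative; the database-facing property is shown as public here because mapping truly private properties through EF6's fluent API needs extra plumbing):

// The enum the rest of the code base uses.
public enum Gender { M = 1, F = 2 }

// Mirrors the legacy lookup table, where 1 = female and 2 = male.
public enum GenderDb { F = 1, M = 2 }

public class Person
{
    public int Id { get; set; }

    // Database-facing property; ideally hidden from callers.
    public GenderDb GenderDbValue { get; set; }

    // Not mapped; converts between the code enum and the legacy values.
    public Gender Gender
    {
        get { return GenderDbValue == GenderDb.M ? Gender.M : Gender.F; }
        set { GenderDbValue = value == Gender.M ? GenderDb.M : GenderDb.F; }
    }
}

// In OnModelCreating:
// modelBuilder.Entity<Person>()
//     .Property(p => p.GenderDbValue)
//     .HasColumnName("FK Gender ID");
// modelBuilder.Entity<Person>()
//     .Ignore(p => p.Gender);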
(Yes you could get away with just one enum type if you like to receive phone calls at 2 am)
I came to a conclusion that it is impossible to properly implement GetHashCode() for an NHibernate entity with an identity column. The only working solution I found is to return a constant. See below for explanation.
This, obviously, is terrible: all dictionary searches effectively become linear. Am I wrong? Is there a workaround I missed?
Explanation
Let's suppose we have an Order entity that refers to one or more Product entities like this:
class Product
{
public virtual int Id { get; set; } // auto; assigned by the database upon insertion
public virtual string Name { get; set; }
public virtual Order Order { get; set; } // foreign key into the Orders table
}
"Id" is what is called an IDENTITY column in SQL Server terms: an integer key that is automatically generated by the database when the record is inserted.
Now, what options do I have for implementing Product.GetHashCode()? I can base it on
Id value
Name value
Identity of the product object (default behavior)
Each of these ideas does not work. If I base my hash code on Id, it will change when the object is inserted into a database. The following was experimentally shown to break, at least in the presence of NHibernate.SetForNet4:
/* add product to order */
var product = new Product { Name = "Sushi" }; // Id is zero
order.Products.Add(product); // GetHashCode() is calculated based on Id of zero
session.SaveOrUpdate(order);
// product.Id is now changed to an automatically generated value from DB
// product.GetHashCode() value changes accordingly
// order.Products collection does not like it; it assumes GetHashCode() does not change
bool isAdded = order.Products.Contains(product);
// isAdded is false;
// the collection is looking up the product by its new hash code and not finding it
Basing GetHashCode() on the object identity (i.e. leaving Product with the default implementation) does not work well either; it was covered on Stack Overflow before. Basing GetHashCode() on Name is obviously not a good idea if Name is mutable.
So, what is left? The only thing that worked for me was
class Product
{
...
public override int GetHashCode() { return 42; }
}
Thanks for reading through this long question.
Do you have any ideas on how to make it better?
PS. Please keep in mind that this is an NHibernate question, not a collections question. The collection type and the order of operations are not arbitrary; they are tied to the way NHibernate works. For instance, I cannot simply make Order.Products something like an IList. That would have important implications, such as requiring an index/order column, etc.
I would base the hash code (and equality, obviously) on the Id; that's the right thing to do. Your problem stems from the fact that you modify Id while the object is in the dictionary. Objects should be immutable in terms of hash code and equality while they are inside a dictionary or hash set.
You have two options -
Don't populate dictionaries or hashsets before storing items in DB
Before saving an object to the DB, remove it from the dictionaries. Save it to the DB and then add it again to the dictionary.
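For reference, a sketch of the Id-based implementation along those lines (with the caveat above: don't put unsaved instances into hashed collections and then save them):

class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual Order Order { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Product;
        if (other == null) return false;

        // Two transient (unsaved) products are equal only if they are the same instance.
        if (Id == 0 || other.Id == 0) return ReferenceEquals(this, other);

        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        // Stable only once the database has assigned Id,
        // hence the two options listed above.
        return Id.GetHashCode();
    }
}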
Update
The problem can also be solved by using other mappings.
You can use a bag mapping - it will be mapped to an IList and should work OK for you. No need to use HashSets or Dictionaries.
If the DB schema is under your control, you may wish to consider adding an index column and making the relation ordered. This will again be mapped to an IList but will have a List mapping.
There are differences in performance, depending on your mappings and scenarios (see http://nhibernate.info/doc/nh/en/#performance-collections-mostefficientupdate)
The title is awful, I know, so here's the long version:
I need to store variable data in a database column -- mostly key-value pairs, but both the number of items and the names of those items are completely unknown at run-time. My initial thinking is to "pickle" the data (a dictionary) into something like a JSON string, which can be stored in the database. When I retrieve the item, I would convert ("unpickle") the JSON string into a normal C# dictionary. Obviously, I don't want anyone directly interacting with the JSON string, though, so the actual property corresponding to the database column should be private, and I would have a public getter and setter that would not be mapped.
private string Data { get; set; }
public Dictionary<string, object> DataDictionary
{
get
{
return Deserialize(Data);
}
set
{
Data = Serialize(value);
}
}
The problem of course is that EF will refuse to map the private Data property and will actually want to map the public DataDictionary property, which shouldn't be mapped. There are ways around this, I believe, but the complexity that this starts generating makes me think I'm going down a rabbit hole I shouldn't. Is my thinking reasonable here, or should I go a different direction?
I suppose I could simply create a one-to-many relationship with a basic table that just consisted of key and value columns, but that feels hackneyed. However, perhaps, that actually is a better route to go given the inherent limitations of EF?
Have you tried using Complex Types? You should be able to achieve your goal by creating a complex type of string on the EF Model.
Start by adding a complex type to the Model. On the complex type, add a scalar property of type string that will hold the data.
You can then create a property of this complex type on the entity that will hold the data.
The code generator should add a partial class that provides access to the properties for the complex type. Create a new partial class of the complex type and add in the serialisation/de-serialisation code as a property as in your question. You can then use this property to access the data.
The complex type in this example is essentially acting as a wrapper that allows you to persist the data value to storage.
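As a rough illustration (the complex type name, its string property, and the use of Json.NET here are assumptions; substitute your own Serialize/Deserialize):

using System.Collections.Generic;
using Newtonsoft.Json;

// Partial class extending a designer-generated complex type that holds a
// single string scalar property named Raw. Names are illustrative.
public partial class DataBag
{
    public Dictionary<string, object> AsDictionary
    {
        get
        {
            return string.IsNullOrEmpty(Raw)
                ? new Dictionary<string, object>()
                : JsonConvert.DeserializeObject<Dictionary<string, object>>(Raw);
        }
        set
        {
            Raw = JsonConvert.SerializeObject(value);
        }
    }
}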