The title is awful, I know, so here's the long version:
I need to store variable data in a database column -- mostly key-value pairs, but both the number of items and the names of those items are completely unknown at run-time. My initial thinking is to "pickle" the data (a dictionary) into something like a JSON string, which can be stored in the database. When I retrieve the item, I would convert ("unpickle") the JSON string into a normal C# dictionary. Obviously, I don't want anyone directly interacting with the JSON string, though, so the actual property corresponding to the database column should be private, and I would have a public getter and setter that would not be mapped.
private string Data { get; set; }

public Dictionary<string, object> DataDictionary
{
    get { return Deserialize(Data); }
    set { Data = Serialize(value); }
}
The problem, of course, is that EF will refuse to map the private Data property and will instead want to map the public DataDictionary property, which shouldn't be mapped. There are ways around this, I believe, but the complexity this starts generating makes me think I'm going down a rabbit hole I shouldn't. Is my thinking reasonable here, or should I go a different direction?
I suppose I could simply create a one-to-many relationship with a basic table that just consists of key and value columns, but that feels like a hack. However, perhaps that actually is the better route to go, given the inherent limitations of EF?
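For what it's worth, the Serialize/Deserialize helpers referenced in the question could be sketched like this. System.Text.Json is used purely as a stand-in (any JSON library such as Json.NET would do), and the class name DictionaryPickler is made up for illustration:

```csharp
using System.Collections.Generic;
using System.Text.Json;

public static class DictionaryPickler
{
    // "Pickle" the dictionary into a JSON string for the database column.
    public static string Serialize(Dictionary<string, object> data)
    {
        return JsonSerializer.Serialize(data);
    }

    // "Unpickle" the JSON string back into a dictionary; an empty or
    // missing value yields an empty dictionary rather than a null.
    public static Dictionary<string, object> Deserialize(string json)
    {
        return string.IsNullOrEmpty(json)
            ? new Dictionary<string, object>()
            : JsonSerializer.Deserialize<Dictionary<string, object>>(json);
    }
}
```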
Have you tried using Complex Types? You should be able to achieve your goal by creating a complex type of string on the EF Model.
Start by adding a complex type to the Model. On the complex type, add a scalar property of type string that will hold the data.
You can then create a property of this complex type on the entity that will hold the data.
The code generator should add a partial class that provides access to the properties for the complex type. Create a new partial class of the complex type and add in the serialisation/de-serialisation code as a property as in your question. You can then use this property to access the data.
The complex type in this example is essentially acting as a wrapper for a value that allows you to store the data value to storage.
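A minimal sketch of that layout, with hypothetical names (DataBlob for the complex type, Raw for its scalar string property, AsDictionary for the hand-written wrapper) and System.Text.Json standing in for whichever serializer you prefer:

```csharp
using System.Collections.Generic;
using System.Text.Json;

// Half that mirrors what the EF code generator would emit for the
// complex type: a single scalar string that is actually persisted.
public partial class DataBlob
{
    public string Raw { get; set; }
}

// Hand-written partial class: exposes the stored string as a
// dictionary, so callers never touch the JSON directly.
public partial class DataBlob
{
    public Dictionary<string, object> AsDictionary
    {
        get
        {
            return string.IsNullOrEmpty(Raw)
                ? new Dictionary<string, object>()
                : JsonSerializer.Deserialize<Dictionary<string, object>>(Raw);
        }
        set { Raw = JsonSerializer.Serialize(value); }
    }
}
```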
I have a situation where I have a type with a property of type object, eg.
public class MyType
{
    public virtual object MyProp { get; set; }
}
This type will have to be:
Saved using Entity Framework to a database, as a byte[] (I have figured the serialization logic)
Transmitted through WCF (I will use the KnownType attribute)
How do I map my object property ensuring that it is converted it to a byte array for storage?
N.B: The object property will be a value type(non-complex)
I thought of creating a separate type for saving to the database e.g:
public class MyTypeEntity
{
    public virtual byte[] MyProp { get; set; }
}
How do I convert/translate between the types while still able to define the relationship mappings?
Does this involve some sort of interception on saving?
The best solution I could think of without breaking my back is simply storing the serialized data in the DB.
If C# had some form of property covariance, maybe this would have been easier. As far as I know, it does not exist.
If there is an elegant alternative that I can use, I would appreciate your insight.
I would recommend keeping a byte[] field on your entity; your class should really mimic the database structure as closely as possible. One way I've done something similar in the past is to create a separate class file (remember, your entities are partial) and add a NotMapped property in that second file. Make its getter and setter do the conversion between object and byte[], and in your code always interact with the object property, which EF will ignore. It's pretty painless, and EF will still track the varbinary column through the byte[] property that you never access directly.
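A rough sketch of that layout. The names (MyPropData) are illustrative, and System.Text.Json stands in for the asker's own serialization logic; in Code First you would additionally mark the wrapper property [NotMapped] (or Ignore it in the fluent configuration):

```csharp
using System.Text.Json;

// Half that EF maps: the varbinary column as a byte[].
public partial class MyType
{
    public byte[] MyPropData { get; set; }
}

// Hand-written half: the unmapped object-typed wrapper the rest of
// the code interacts with. EF never sees this property.
public partial class MyType
{
    public object MyProp
    {
        get
        {
            return MyPropData == null
                ? null
                : JsonSerializer.Deserialize<object>(MyPropData);
        }
        set { MyPropData = JsonSerializer.SerializeToUtf8Bytes(value); }
    }
}
```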
I am currently attempting to implement a revision history screen in an MVC app. I need to be able to retrieve the names of fields which have changed in each revision using Envers. So I am following directions here: http://envers.bitbucket.org/#envers-tracking-modified-entities-revchanges
I am using the second option since we have a custom revision entity. It looks like this:
[RevisionEntity(typeof(MyRevisionListener))]
public class RevisionEntity : DefaultTrackingModifiedEntitiesRevisionEntity
{
public virtual Person User { get; set; }
}
As you can see I am inheriting from DefaultTrackingModifiedEntitiesRevisionEntity in order to make sure the class has the property to hold the modified entities' names.
Per the documentation this should create a table called RevChanges in which this information is stored with reference to the revisions table:
Envers provides a simple mechanism that creates REVCHANGES table which
stores entity names of modified persistent objects. Single record
encapsulates the revision identifier (foreign key to REVINFO table)
and a string value.
I am never seeing this table created. I tried creating such a table myself along with a related class and wiring up the mappings, but I don't see how Envers would know to put the data into that table without me configuring it somehow. I just get an exception saying that the object is different from the target type when the get method is called on the new type.
How can I get this to work?
If you use a custom revision entity, you need to map it just like you do with normal entities.
http://envers.bitbucket.org/#revisionlog
The situation is that I have a table that models an entity. This entity has a number of properties (each identified by a column in the table). The thing is that in the future I'd need to add new properties or remove some properties. The problem is how to model both the database and the corresponding code (using C#) so that when such an occasion appears it would be very easy to just "have" a new property.
In the beginning there was only one property, so I had one column. I defined the corresponding property in the class with the appropriate type and name, then created stored procedures to read and update it. Then came the second property: a quick copy-paste, a changed name and type, a bit of SQL, and there it was. Obviously this is not a sustainable model going forward. By this time some of you might suggest an ORM (EF or another) because it would generate the SQL and code automatically, but for now that is not an option for me.
I thought of having one procedure to read a single property (by property name) and another to update it (by name and value), then some general procedures for reading several properties, or all of them, for an entity in one statement. This may sound easy in C# if you consider using generics, but the database doesn't know about generics, so a strongly-typed solution isn't possible.
I would like to have a solution that's "as strongly-typed as possible" so I don't need to do a lot of casting and parsing. I would define the available properties in code so you don't go guessing what you have available and use magic strings and the like. Then the process of adding a new property in the system would only mean adding a new column to the table and adding a new property "definition" in code (e.g. in an enum).
It sounds like you want to do this:
MyObj x = new MyObj();
x.SomeProperty = 10;
You have a table created for that, but you don't want to keep altering that table when you add
x.AnotherProperty = "Some String";
You need to normalize the table data like so:
-> BaseTable
RecordId, Col1, Col2, Col3
-> BaseTableProperty
PropertyId, Name
-> BaseTableValue
ValueId, RecordId, PropertyId, Value
Your class would look like so:
public class MyObj
{
    public int Id { get; set; }
    public int SomeProperty { get; set; }
    public string AnotherProperty { get; set; }
}
When you create your object from your DL, you enumerate the record set. You then write code once that looks for a property with the same name as your configuration value (BaseTableProperty.Name == MyObj.&lt;PropertyName&gt;) and then attempts the type conversion as you enumerate the record set.
Then, you simply add another property to your object, another record to the database in BaseTableProperty, and then you can store values for that guy in BaseTableValue.
Example:
RecordId
========
1
PropertyId   Name
==========   ====
1            SomeProperty

ValueId   RecordId   PropertyId   Value
=======   ========   ==========   =====
1         1          1            100
You have two result sets: one for the basic data, and one joined from the Property and Value tables. As you enumerate each record, you see a Name of SomeProperty. Does typeof(MyObj).GetProperty("SomeProperty") exist? Yes? What is its data type? int? OK, then try to convert "100" to int by setting the property:
propertyInfo.SetValue(myNewObjInstance, Convert.ChangeType(dbValue, propertyInfo.PropertyType), null);
For each property.
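Pulling those pieces together, a self-contained sketch of the hydration step might look like this. MyObj is repeated from above; the Hydrator name and the KeyValuePair-based input shape are just for illustration, since a real version would enumerate your data reader instead:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class MyObj
{
    public int Id { get; set; }
    public int SomeProperty { get; set; }
    public string AnotherProperty { get; set; }
}

public static class Hydrator
{
    // Applies (name, value) pairs from the property/value result set
    // to the target object via reflection, converting each string
    // value to the declared property type.
    public static void Apply(object target, IEnumerable<KeyValuePair<string, string>> rows)
    {
        foreach (var row in rows)
        {
            PropertyInfo pi = target.GetType().GetProperty(row.Key);
            if (pi == null) continue; // no matching property; skip it
            pi.SetValue(target, Convert.ChangeType(row.Value, pi.PropertyType), null);
        }
    }
}
```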
Even if you said you cannot use them, that is what most ORMs do. Depending on which one you use (or even create, if it's a learning experience), they vary greatly in complexity and performance. If you prefer a lightweight ORM, check out Dapper.Net. It makes use of generics as well, so you can check the code, see how it works, and create your own solution if needed.
With the new release of Entity Framework 4.1, I thought it would be a good time to learn how to utilise it in my coding. I've started off well, but I seem to have hit a brick wall and I don't know what the best approach is.
My issue is with lookup tables: I can't see how to keep my data as objects (rather than lists, anonymous types, etc.) when pulling in data from a lookup table.
I have looked around on Google but most of the posts I find are prior to the latest release of EF 4.1 and I am assuming that there is a better way to do it.
I have a simple 'invoice header' and 'customer' situation so I have set the mappings up as you would expect (the invoice header has the Id of the customer it relates to).
If I pull in data from only the invoice table, then I get a true object that I can bind to a datagrid and later save changes, but this doesn't pull in the customer name. For example:
var results = from c in context.InvoiceHeaders
              select c;
If I restructure the query to pull back specific columns, including drilling down into the customer table to get the customer name directly, then I get the data I want, but it's no longer the type of object I would expect (an invoice object). Like this:
var results = from c in context.InvoiceHeaders
              select new { c.CreatedBy, c.Customer.Name };
But it now becomes an anonymous type, and it seems to lose its bindings back to the database (I hope I'm making sense).
So - my question is, "what is the best/official way to use lookup tables in EF 4.1" and/or "can I use lookup tables and keep my bindings"?
Please let me know if you need me to post any code, but on this occasion, as it was a general question, I didn't feel I needed to.
Thanks in advance,
James
EF classes are partial, so you may expand them:
public partial class InvoiceHeaders
{
    public string CustomerName
    {
        get
        {
            try
            {
                return this.Customer.Name;
            }
            catch
            {
                return string.Empty;
            }
        }
        private set { }
    }
}
But when designing forms, the data-binding tools do not correctly pick up this expansion, so you should define a new class and use it as the data source when binding a component:
public partial class InvoiceHeadersEx : InvoiceHeaders
{
}
and in the form's Load event, change the binding data source:
private void Form1_Load(object sender, EventArgs e)
{
    InvoiceHeadersExDataGridView.DataSource = InvoiceHeadersSource;
    InvoiceHeadersBindingSource.DataSource = context.InvoiceHeaders;
}
I think the answer to this is to make sure you're using reference objects (I think that's what EF calls them) in your structure, so that an Invoice doesn't just have public int ClientId { get; set; } but also public virtual Client Client { get; set; }. This gives you a direct link to the actual client, and it should still return Invoice objects.
Oh, I get the problem now. When you create an anonymous type, it's basically a new class (it has a type definition and everything). Because it's a new type, which you have control over, it's not an EF data type or linked to a data context.
Your best bet is returning the entire Customer object. I appreciate this can cause performance issues when you have large objects; all I can say is, keep your objects smallish.
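To illustrate the anonymous-type point with plain in-memory collections (no EF involved; every class and method name here is invented for the demo): projecting the entity itself keeps the real type, while projecting into new { ... } produces a compiler-generated type with no link back to the context:

```csharp
using System.Collections.Generic;
using System.Linq;

// In-memory stand-ins for the EF entities.
public class Customer { public string Name { get; set; } }

public class InvoiceHeader
{
    public string CreatedBy { get; set; }
    public virtual Customer Customer { get; set; }
}

public static class QueryShapes
{
    // Projecting the entity keeps the real InvoiceHeader type, so
    // Customer.Name stays reachable and the objects remain bindable.
    public static List<InvoiceHeader> AsEntities(IEnumerable<InvoiceHeader> source)
    {
        return (from c in source select c).ToList();
    }

    // Projecting into new { ... } creates a compiler-generated type
    // that EF would no longer track: fine for display, useless for saving.
    public static List<object> AsAnonymous(IEnumerable<InvoiceHeader> source)
    {
        return (from c in source
                select new { c.CreatedBy, CustomerName = c.Customer.Name })
               .Cast<object>()
               .ToList();
    }
}
```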
I want to learn how others cope with the following scenario.
This is not homework or an assignment of any kind. The example classes have been created to better illustrate my question however it does reflect a real life scenario which we would like feedback on.
We retrieve all data from the database and place it into an object. An object represents a single record; if multiple records exist in the database, we place them into a List<> of the record objects.
Let's say we have the following classes:
public class Employee
{
    public bool _Modified;
    public string _FirstName;
    public string _LastName;
    public List<Employee_Address> _Address;
}
public class Employee_Address
{
    public bool _Modified;
    public string _Address;
    public string _City;
    public string _State;
}
Please note that the getters and setters have been omitted from the classes for the sake of clarity. Before any code police accuse me of not using them, please note that they have been left out for this example only.
The database has a table for Employees and another for Employee Addresses.
Conceptually, what we do is to create a List object that represents the data in the database tables. We do a deep clone of this object which we then bind to controls on the front end. We then have two objects (Orig and Final) representing data from the database.
The user then makes changes to the "Final" object by creating, modifying, deleting records. We then want to persist these changes to the database.
Obviously we want to be as elegant as possible, only editing, creating, deleting those records that require it.
We ultimately want to compare the two List objects so that we can;
See what properties have changed so that the changes can be persisted to the database.
See what properties (records) no longer exist in the second List<> so that these records can be deleted from the database.
See what new properties exist in the new List<> so that we can create these in the database.
Who wants to get the ball rolling on how we can best achieve this. Keep in mind that we also need to drill down into the Employee_Address list to check for any changes, not just the top level properties.
I hope I have made myself clear and look forward to any suggestions.
Add a nullable ObjectID field to your layer's base type. Pass it to the front end and back to see whether a particular instance persists in the database.
It also has many other uses, even if you don't have any kind of Identity Map.
I would do exactly the same thing .NET does in its Data classes, that is, keep the record state (System.Data.DataRowState comes to mind) and all associated versions together in one object.
This way:
You can tell at a glance whether it has been modified, inserted, deleted, or is still the original record.
You can quickly find what has been changed by querying the new vs old versions, without having to dig in another collection to find the old version.
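A bare-bones sketch of that idea (the Tracked&lt;T&gt; wrapper and RecordState enum are invented names, loosely mirroring System.Data.DataRowState; a real version would deep-clone the original rather than share the reference):

```csharp
public enum RecordState { Unchanged, Added, Modified, Deleted }

// Keeps the original and current versions of a record side by side,
// so no second list is needed to find out what changed.
public class Tracked<T> where T : class
{
    public T Original { get; private set; }
    public T Current { get; private set; }
    public RecordState State { get; private set; }

    public Tracked(T original, RecordState state = RecordState.Unchanged)
    {
        Original = original;
        Current = original;
        State = state;
    }

    // Replacing the current version flips an Unchanged record to Modified;
    // Added and Deleted records keep their state.
    public void Update(T newVersion)
    {
        Current = newVersion;
        if (State == RecordState.Unchanged) State = RecordState.Modified;
    }

    public void Delete() { State = RecordState.Deleted; }
}
```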
You should investigate the use of the Identity Map pattern. Coupled with Unit of Work, this allows you to maintain an object "cache" of sorts from which you can check which objects need saving to the database, and when reading, to return objects from the identity map rather than creating new objects and returning those.
Why would you want to compare two list objects? You will potentially be using a lot of memory for what is essentially duplicate data.
I'd suggest having a status property for each object that can tell you whether that particular object is New, Deleted, or Changed. If you want to go further than making the property an enum, you can make it an object containing some sort of Dictionary that holds the changes to apply, though that will most likely only be relevant for the Changed status.
After you've added such a property, it should be easy to go through your list, add the New objects, remove the Deleted objects etc.
You may want to check how the Entity Framework does this sort of thing as well.
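The status-property approach above can be sketched like this (ChangeStatus, EmployeeRecord, and Persister are made-up names for the demo; in practice the real Employee class would carry the Status field):

```csharp
using System.Collections.Generic;
using System.Linq;

public enum ChangeStatus { Unchanged, New, Changed, Deleted }

public class EmployeeRecord
{
    public string FirstName { get; set; }
    public ChangeStatus Status { get; set; }
}

public static class Persister
{
    // Splits a flat list into the three groups that need database
    // work; Unchanged records are simply skipped.
    public static (List<EmployeeRecord> toInsert,
                   List<EmployeeRecord> toUpdate,
                   List<EmployeeRecord> toDelete) Split(IEnumerable<EmployeeRecord> records)
    {
        var list = records.ToList();
        return (list.Where(r => r.Status == ChangeStatus.New).ToList(),
                list.Where(r => r.Status == ChangeStatus.Changed).ToList(),
                list.Where(r => r.Status == ChangeStatus.Deleted).ToList());
    }
}
```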