Add property to POCO class at runtime - C#

I selected ServiceStack OrmLite for my project, which is a pure data-oriented application. I want to allow the end user to create his own object types, defined in an XML format, that will be used to generate classes at runtime using CodeDOM.
I will also be defining some "system" objects required by the application (e.g. User), but I cannot foresee all the properties the end user will need, so I am looking for a way to allow extending the classes I create at design time. Sample below:
public class User
{
public Guid Uid { get; set; }
public String Username { get; set; }
public String Password { get; set; }
}
The end user wants to have an Email and an Address. He should be able to add those two properties to the class above, so that the whole class becomes (and can still be used by OrmLite, since it allows overwriting):
public class User
{
public Guid Uid { get; set; }
public String Username { get; set; }
public String Password { get; set; }
public String Email{ get; set; }
public String Address { get; set; }
}
I know that doing this carries a risk of crashing the system (if the class is already instantiated), so I am looking for the best way to avoid that issue while still meeting this need.

It seems that there are two parts to what you're doing here. You need to create types dynamically to support the additional properties. You also need to ensure that you never end up with duplicate types in your AppDomain, i.e. two different definitions of User.
Runtime type generation
The various suggestions already given handle how to create the types. In one project, we had something similar. We created a base class that had the core properties and a dictionary to store the 'extension' properties. Then we used Reflection.Emit to create a derived type that had the desired properties. Each property definition simply read from or wrote to the dictionary in the base class. Since Reflection.Emit entails writing low-level IL code, it seems complex at first. We wrote some sample derived classes in another class library and compiled them. These were examples of what we'd actually need to achieve at runtime. Then we used ildasm.exe to see what code the compiler produced. This made it quite easy to work out how we could generate the same code at runtime.
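A minimal sketch of that pattern (the names here are made up, and the derived class is hand-written to show the kind of type we actually emitted with Reflection.Emit):
using System.Collections.Generic;

public abstract class ExtensibleEntity
{
    // Backing store for the runtime 'extension' properties.
    protected readonly Dictionary<string, object> Extensions = new Dictionary<string, object>();

    protected object GetExtension(string name)
    {
        object value;
        return Extensions.TryGetValue(name, out value) ? value : null;
    }

    protected void SetExtension(string name, object value)
    {
        Extensions[name] = value;
    }
}

// Hand-written example of the kind of derived type we generated at runtime:
// each generated property just delegates to the dictionary in the base class.
public class UserExtended : ExtensibleEntity
{
    public string Email
    {
        get { return (string)GetExtension("Email"); }
        set { SetExtension("Email", value); }
    }
}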
Avoiding namespace collisions
Your second challenge is to avoid having duplicate type names. We appended a guid (with invalid characters removed) to the name of each generated type to make sure this never happened. Easy fix, though I don't know whether you could get away with that with your ORM.
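For example, something along these lines (a trivial sketch):
// "N" format drops the dashes, which are not valid in type names.
string typeName = "User_" + Guid.NewGuid().ToString("N");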
If this is server code, you also need to consider the fact that assemblies are never unloaded in .NET. So if you're repeatedly generating new types at runtime, your process will continue to grow. The same will happen in client code, but this may be less of an issue if you don't expect the process to run for an extended period of time.
I said assemblies are not unloaded; however, you can unload an entire AppDomain. So if this is server code you could have the entire operation run in its own appdomain, then tear it down afterwards to ensure that the dynamically created types are unloaded.
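A rough sketch of that approach (full .NET Framework only; newer runtimes cannot unload AppDomains):
// Run the dynamic type generation in its own AppDomain so the generated
// assemblies can be discarded afterwards.
AppDomain worker = AppDomain.CreateDomain("TypeGenerationDomain");
try
{
    // worker.CreateInstanceAndUnwrap(...) a MarshalByRefObject that performs
    // the generation and returns only serializable results across the boundary.
}
finally
{
    AppDomain.Unload(worker);
}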

Check out the ExpandoObject, which provides dynamic language support for doing something like this. You can use it to add additional properties to your POCOs at runtime. Here's a link on using .NET's DLR features: http://msdn.microsoft.com/en-us/library/system.dynamic.expandoobject%28v=vs.100%29.aspx
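A minimal example of what that looks like (note that ExpandoObject gives you dynamic members, but it won't map to a typed OrmLite table by itself):
using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic user = new ExpandoObject();
user.Username = "jsmith";
user.Email = "jsmith@example.com";   // property added at runtime, no class definition needed

// ExpandoObject also exposes its members as a dictionary.
var members = (IDictionary<string, object>)user;
Console.WriteLine(members["Email"]);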

Why not use key/value pairs for all its properties, or at least the dynamic ones?
http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx
You can do it the way you're describing with Reflection, but it will take a performance hit; this approach also allows removing properties.
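For instance, something like this (a sketch; a generic Dictionary<string, object> works the same way as the non-generic Hashtable):
using System.Collections;

var user = new Hashtable();
user["Username"] = "jsmith";
user["Email"] = "jsmith@example.com";   // added at runtime

// Unlike compiled properties, entries can be removed again.
user.Remove("Email");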

The project I'm currently working on has a similar requirement. We have a system already in production and had a client request additional fields.
We solved this by simply adding a CustomFields property to our model.
public class Model: IHasId<Guid>
{
[PrimaryKey]
[Index(Unique = true)]
public Guid Id { get; set; }
// Other Fields...
/// <summary>
/// A store of extra fields not required by the data model.
/// </summary>
public Dictionary<string, object> CustomFields { get; set; }
}
We've been using this for a few weeks with no issues.
An additional benefit we found from this was that each row could have its own custom fields so we could handle them on a per record basis instead of requiring them for every record.
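For illustration, a rough usage sketch (dbFactory is assumed to be an OrmLiteConnectionFactory; OrmLite blobs the dictionary into a single text column by default):
using (var db = dbFactory.Open())
{
    var record = new Model
    {
        Id = Guid.NewGuid(),
        CustomFields = new Dictionary<string, object>
        {
            { "Email", "jsmith@example.com" },
            { "Address", "1 Example Street" }
        }
    };

    db.Save(record);

    // The dictionary round-trips with the row.
    var loaded = db.SingleById<Model>(record.Id);
    object email;
    loaded.CustomFields.TryGetValue("Email", out email);
}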

Related

Updating DDD Aggregates with Collections

So, I've got an aggregate (Project) that has a collection of entities (ProjectVariables) in it. The variables do not have Ids on them because they have no identity outside of the Project Aggregate Root.
public class Project
{
public Guid Id { get; set; }
public string Name { get; set; }
public List<ProjectVariable> ProjectVariables { get; set; }
}
public class ProjectVariable
{
public string Key { get; set; }
public string Value { get; set; }
public List<string> Scopes { get; set; }
}
The user interface for the project is an Angular web app. A user visits the details for the project, and can add/remove/edit the project variables. He can change the name. No changes persist to the database until the user clicks save and the web app posts some json to the backend, which in turns passes it down to the domain.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them. An example in this domain could be a method Project.AddProjectVariable(projectVariable).
To keep to this practice, the front-end app needs to track changes and submit them as something like this:
public class SaveProjectCommand
{
public string NewName { get; set; }
public List<ProjectVariable> AddedProjectVariables { get; set; }
public List<ProjectVariable> RemovedProjectVariables { get; set; }
public List<ProjectVariable> EditedProjectVariables { get; set; }
}
I suppose it's also possible to post the now edited Project, retrieve the original Project from the repo, and diff them, but that seems a little ridiculous.
This object would get translated into Service Layer methods, which would call methods on the Aggregate root to accomplish the intended behaviors.
So, here's where my questions come...
ProjectVariables have no Id. They are transient objects. If I need to remove them, as passed in from the UI tracking changes, how do I identify the ones that need to be removed on the Aggregate? Again, they have no identification. I could add surrogate Ids to the ProjectVariables entity, but that seems wrong and dirty.
Does change tracking in my UI seem like it's making the UI do too much?
Are there alternative mechanisms? One thought was to just replace all of the ProjectVariables in the Project Aggregate Root every time it's saved. Wouldn't that have me adding a Project.ClearVariables() and then using Project.AddProjectVariable() to replace them? Project.ReplaceProjectVariables(List) seems very "CRUDish".
Am I missing a key component? It seems to me that DDD atomic methods don't mesh well with a pattern where you can make a number of different changes to an entity before committing it.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them.
I wouldn't phrase it that way. The methods should, as much as possible, reflect cohesive operations that have a domain meaning and correspond with a verb or noun in the ubiquitous language. But the state transitions that happen as a consequence are not necessarily small, they can change vast swaths of Aggregate data.
I agree that it is not always feasible though. Sometimes, you'll just want to change some entities field by field. If it happens too much, maybe it's time to consider changing from a rich domain model approach to a CRUD one.
ProjectVariables have no Id. They are transient objects.
So they are probably Value Objects instead of Entities.
You usually don't modify Value Objects but replace them (especially if they're immutable). Project.ReplaceProjectVariables(List) or some equivalent is probably your best option here. I don't see it as being too CRUDish. Pure CRUD here would mean that you only have a setter on the Variables property and are not even allowed to create a method and name it as you want.
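A minimal sketch of what that could look like on the aggregate (assuming the variables collection is encapsulated rather than exposed as a settable list):
using System;
using System.Collections.Generic;

public class Project
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }

    private readonly List<ProjectVariable> _projectVariables = new List<ProjectVariable>();
    public IReadOnlyList<ProjectVariable> ProjectVariables
    {
        get { return _projectVariables; }
    }

    // Treat the variables as value objects: replace the whole set atomically.
    public void ReplaceProjectVariables(IEnumerable<ProjectVariable> variables)
    {
        if (variables == null) throw new ArgumentNullException("variables");
        _projectVariables.Clear();
        _projectVariables.AddRange(variables);
    }
}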

Exporting EF Entity to Excel/PDF and How to Exclude Attributes without violating SRP?

I am working with Entity Framework as my ORM for a project at work, and I need to be able to write only some of the values of each entity to an existing Excel template.
The data is required to be formatted as Excel Tables so that the end user can reference the information by using formulas like "=AVG(People_Table[Age])" (note: this is just a contrived example, for simplicity). There is also a requirement to export the values to PDF.
I've decided that reflection is the way to go to export the information in the least painful manner possible. The problem, however, is that I want to exclude certain properties from being written to the spreadsheet. I also might want to write the properties in a certain order and specify a display format.
One way I could do this is by defining specific data attributes on the properties. I liked this answer on ignoring specific properties: Exclude property from getType().GetProperties(). So a possible solution could be:
// class I want to export
public class PersonEntity {
[SkipAttribute] // per solution in the referenced answer
public int PersonId { get; set; }
[SkipAttribute]
public int ForeignKeyId { get; set; }
[Display(Order = 3)]
public int Age { get; set; }
[Display(Name="First Name", Order = 1)]
public string FirstName { get; set; }
[Display(Name="Last Name", Order = 2)]
public string LastName { get; set; }
/* additional properties removed for brevity */
}
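For reference, the reflection-based export I have in mind is roughly this (a sketch only; SkipAttribute is the marker attribute from the referenced answer, and the actual ClosedXML/iTextSharp writing is elided):
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class SkipAttribute : Attribute { }

public static class ExportHelper
{
    // Returns (header, value-selector) pairs ordered by [Display(Order = ...)],
    // skipping anything marked with [Skip].
    public static List<KeyValuePair<string, Func<object, object>>> GetColumns(Type type)
    {
        return type.GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => !p.IsDefined(typeof(SkipAttribute), inherit: true))
            .Select(p => new
            {
                Property = p,
                Display = p.GetCustomAttribute<DisplayAttribute>()
            })
            .OrderBy(x => x.Display != null ? (x.Display.GetOrder() ?? int.MaxValue) : int.MaxValue)
            .Select(x => new KeyValuePair<string, Func<object, object>>(
                x.Display != null && x.Display.Name != null ? x.Display.Name : x.Property.Name,
                instance => x.Property.GetValue(instance)))
            .ToList();
    }
}
Each pair then becomes a column header plus a value accessor for the Excel or PDF writer to consume.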
The problem I see with the above solution is that this entity class is now doing two things: one, providing a mapping between EF and the database, which is its primary function; and two, providing information on how to consume the class for exporting to Excel. I see this as getting messy and leading to confusion because it (possibly?) violates SRP. Also, I only need the SkipAttribute when exporting to Excel; most of the time I will just ignore it.
An alternative solution would be to create a separate set of classes that contain only the needed properties, use those for exporting to Excel, and then use a tool like AutoMapper to map from the EF Person to this class.
So, the export class would be:
public class PersonExportModel {
[Display(Name="First Name")]
public string FirstName { get; set; }
[Display(Name="Last Name")]
public string LastName { get; set; }
public int Age { get; set; }
/* additional properties removed for brevity */
}
And I would just use reflection to dump the values out to the specified format using ClosedXML or a PDF rendering library like ITextSharp.
My concern with this solution is that it ends up with a lot of extra code just to ignore a few unwanted properties (mostly PKs, FKs, and some complex relationship properties). There is also the issue that any update to the EF class, like removing a property, will require me to go through the other classes and remove the corresponding properties. But I like this solution because there is less confusion about what data is needed for exporting to Excel.
So I'm stuck between either bloating my EF class to describe how it should be exported, or creating separate export models that are tightly coupled to the EF class and would be a pain to update if the underlying model changes. The mapping between classes is a real pain in itself, which can be alleviated with AutoMapper; that, however, comes with its own set of problems, such as obfuscated mapping and performance penalties. I could live with these "problems" if it means I do not have to manually map between the two classes.
I've thought about farming the work out to SSRS, but I need the ability to write the data to specific existing workbooks, which I understand is not possible. I'd also need the ability to create named tables, which I also understand is not possible out of the box with SSRS. I'd also need to create two reports, because the Excel output would look much different than the PDF format. So even SSRS would cause a lot of extra work.
Any suggestions on which solution might be best, or perhaps an alternative approach? The requirements of this project are in flux, so I'm looking for a solution that will be as painless as possible to update.

C# NHibernate - Remove all references to object on delete

I have two objects. One, the parent, references a Locale. This locale is from a list of locales. When that locale is deleted, I want it to clean up any references to itself from all referencing types (setting the relevant value to null).
Right now, I have a system that walks across all entities that NHibernate is mapping and, by using their class metadata, determines which types reference the locale type. Then, I build a query (using ICriteria) for that referencing type where the property of type Locale equals the locale's Id that I'm trying to delete. Any objects that come back, I set that property to null and then update them.
Question: Is there a better way - hopefully using something built into NHibernate - to instruct an object to remove all references to itself on delete?
Objects:
public class Parent
{
public virtual Guid Id { get; set; }
public virtual Locale Loc { get; set; }
}
public class Locale
{
public virtual Guid Id { get; set; }
}
Mappings:
public class ParentMapping : ClassMap<Parent>
{
    public ParentMapping()
    {
        Id(x => x.Id).GeneratedBy.Guid();
        References(x => x.Loc).Nullable();
    }
}
public class LocaleMapping : ClassMap<Locale>
{
    public LocaleMapping()
    {
        Id(x => x.Id).GeneratedBy.Guid();
    }
}
As requested, here's how I wound up dealing with this problem. I actually used a suggestion originally given by @Fran to come up with a solution.
Solution
This solution is very specific to my type of application and involves using a number of parts of the application working together to achieve my desired result. Specifically, my application is a RESTful web service, powered by WCF, JSON.NET, and NHibernate.
First, I added a reference to all parents in the locale and used a HasMany mapping, so that the locale knew all of the parents that reference it:
public virtual IList<Parent> Parents { get; set; }
and
HasMany(x => x.Parents);
It's also important to point out here that I use lazy loading throughout the application.
While this allowed me to easily delete the locale by using the proper cascade behaviors, this posed a problem in loading/GET scenarios in that when I passed the locale into JSON.NET (on its way out the door to the client), JSON.NET would walk the Parents collection, and serialize the whole thing. Obviously, this is undesired as we're feeding the client much more than they asked for. This is the problem I alluded to in my comment in the OP.
As @Fran mentioned, I could use projections; however, all of my reference lists are accessed through a common endpoint in order to abstract their CRUD operations and reduce the amount of repeated code: all of my reference lists implement an abstract class called ReferenceListBase. Anyway, I wanted a solution in which the implementing class itself was able to decide how much of it should be sent to the client (serialized).
My solution was to put a [JsonIgnore] attribute on the Parents collection, which, in conjunction with lazy loading, means that JSON.NET never touches the property and therefore the relationship never gets loaded.
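Putting the pieces together, the relevant parts end up looking roughly like this (a sketch; the exact cascade/inverse settings will depend on your mappings):
using System;
using System.Collections.Generic;
using FluentNHibernate.Mapping;
using Newtonsoft.Json;

public class Locale
{
    public virtual Guid Id { get; set; }

    // Lazy-loaded and marked [JsonIgnore], so serializing a Locale for a GET
    // never touches (or loads) the Parents collection.
    [JsonIgnore]
    public virtual IList<Parent> Parents { get; set; }
}

public class LocaleMapping : ClassMap<Locale>
{
    public LocaleMapping()
    {
        Id(x => x.Id).GeneratedBy.Guid();
        // Non-inverse one-to-many: when the Locale is deleted, NHibernate updates
        // the referencing column on Parent to null before removing the Locale row.
        HasMany(x => x.Parents);
    }
}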
This solution has always kind of felt like a hack, but it has achieved all of the results I want and made adding new reference lists very easy. I hope this helps you; if it doesn't, post a new question, link it here, and I'll try to help you out. :)

What design pattern uses this approach for naming POCO classes

I've got a library that handles the typical add/edit/update methods for an application. I'm wondering what design pattern covers naming the POCO classes that bundle the data sent back and forth. For example, one class might be similar to another, but needs to include a few extra members when being sent back to the application vs. the data that is sent in to be saved.
For example, this might be a POCO class that I would populate in a library method before sending it back to the app to be displayed/consumed.
public class CorporateDeptAssignmentInfo
{
public int Id { get; set; }
public int DivisionKey { get; set; }
public int DeptKey { get; set; }
public int Count { get; set; }
public string DeptName { get; set; }
public DateTime Corp_dept_from_date { get; set; }
public DateTime Corp_dept_to_date { get; set; }
}
On the other hand, if I'm adding a new record, I might not want to populate all members.
I could either (a) make some members nullable or (b) create a new POCO class with a slightly different name for use when calling an update/add library method.
Are there any design patterns that mention the use of POCO classes in either of the above ways?
It's either an Adapter, Decorator, or Facade. That's where I think it's heading anyway. You are looking for a way to present something, with modifications/simplification.
I don't know any specific design patterns for this scenario except the Data Transfer Object, but if your domain object actually does allow nullable values, why are your POCOs not designed in the same way?
Personally, I would make two POCO classes if the loading and adding/updating processes take different data. Both of these classes usually have an ID property that refers to the same domain object. Sometimes it's also useful if one of these classes encapsulates the other POCO, but I don't have this kind of situation often in my code.
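As an illustration of the two-class approach (names are made up, loosely following the example in the question):
using System;

// Read model: returned to the app for display; includes resolved/derived fields.
public class CorporateDeptAssignmentInfo
{
    public int Id { get; set; }
    public int DeptKey { get; set; }
    public string DeptName { get; set; }
    public DateTime CorpDeptFromDate { get; set; }
}

// Write model: sent in to add/update; only the fields the caller must supply.
public class CorporateDeptAssignmentUpdate
{
    public int Id { get; set; }          // same ID, resolving to the same domain object
    public int DeptKey { get; set; }
    public DateTime? CorpDeptFromDate { get; set; }
}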
If you have further questions, feel free to ask.

Dynamic class creation

We have a data layer which contains classes generated from database outputs (tables/views/procs/functions). The tables in the database are normalized and designed in an OOP-like way (the table for "invoice" has a 1:1 relation to the table for "document", the table for "invoice-item" has a 1:1 relation to the table for "document-item", etc.). All access to/from the database is through stored procedures (for simple tables too).
A typical class looks like this (shortened):
public class DocumentItem {
public Guid? ItemID { get; set; }
public Guid? IDDocument { get; set; }
public DateTime? LastChange { get; set; }
}
public class InvoiceItem : DocumentItem {
public Guid? IDProduct { get; set; }
public decimal? Price { get; set; }
}
The problem is that the database tables have relations similar to multiple inheritance in OOP. Currently we create a new class for every database output, but every database output is a combination of "pure" tables in the database.
The ideal solution (IMHO) would be to transform the classes into interfaces, use multiple interface implementation, and then automatically implement the members (these "table classes" have only properties, and the property bodies are always the same).
For example:
public interface IItem {
Guid? ItemID { get; set; }
DateTime? LastChange { get; set; }
}
public interface IDocumentItem : IItem {
Guid? IDDocument { get; set; }
}
public interface IItemWithProduct : IItem {
Guid? IDProduct { get; set; }
}
public interface IItemWithRank : IItem {
string Rank { get; set; }
}
public interface IItemWithPrice : IItem {
decimal? Price { get; set; }
}
// example of "final" item interface
public interface IStorageItem : IDocumentItem, IItemWithProduct, IItemWithRank { }
// example of "final" item interface
public interface IInvoiceItem : IDocumentItem, IItemWithProduct, IItemWithPrice { }
// the result should be a object of class which implements "IInvoiceItem"
object myInvoiceItem = SomeMagicClass.CreateClassFromInterface( typeof( IInvoiceItem ) );
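For clarity, the class such a "magic" factory would need to produce is nothing more than this (a hypothetical hand-written version; in practice it would be generated at runtime):
// What SomeMagicClass would have to generate for IInvoiceItem:
public class InvoiceItemImpl : IInvoiceItem
{
    public Guid? ItemID { get; set; }
    public DateTime? LastChange { get; set; }
    public Guid? IDDocument { get; set; }
    public Guid? IDProduct { get; set; }
    public decimal? Price { get; set; }
}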
The database contains hundreds of tables, and the whole solution is composed of dynamically loaded modules (100+ modules).
What do you think is the best way to deal with this?
EDIT:
Using partial classes is a good tip, but it cannot be used in our solution, because "IDocumentItem" and "IItemWithPrice" (for example) are in different assemblies.
Right now, if we make a change to the "DocumentItem" table, we must go and regenerate the source code in all dependent assemblies. There is almost no reuse (because we cannot use multiple inheritance). It's quite time-consuming when there are dozens of dependent assemblies.
I think it is a bad idea to automatically generate your domain model from your database schema.
So, you're really looking for some kind of mix-in technology. Of course, I have to ask why you aren't using LINQ to Entity Framework or NHibernate. O/RMs handle these problems by mapping the relational model into usable data structures that have APIs to support all of the transactions that you'll need to manipulate data in the database. But I digress.
If you are really looking for a mix-in technology to do dynamic code generation, check out Cecil at the Mono Project. It's a way better place to start than trying to use Reflection.Emit to build dynamic classes. There are other dynamic code generators out there but you may want to start with Cecil since the documentation is pretty good.
If you wish to continue auto-generating from the database and want to model multiple inheritance, then I think you have the right idea: Alter the tool to spit out interfaces with multiple inheritance, plus X num implementations.
You indicated elsewhere that a convention for inheritance vs. aggregation is enforced, and (as I understand) you know exactly how the resulting interfaces and classes should look. I understand that business rules are implemented elsewhere (maybe in a business rules engine?), so regenerating the classes should not require changes to dependent code, unless you want to take advantage of those changes, or existing properties have been altered or removed.
But you won't be done. Your classes will still have IDs of related entities. If you want to make things easier for client code, you should have references to related entities (not caring about the related entity's ID), like this:
public class Person{
public Guid? PersonID { get; set; }
public Person Parent { get; set; }
}
That would make things easier on the client. When you think about it, going from IDs to references is work you have to do anyway; it's better to do it once in the middle tier than to let the client do it N times. Plus, this makes your code less database-dependent.
So above all else, I recommend writing an OO wrapper for the auto-generated classes. You would program against this OO wrapper for almost everything; let only the data access layer interact with the auto-generated classes. Sure, you can't reuse inheritance metadata in the database (specified via conventions, I assume?), but at least you won't be carving a new path.
By contrast, what you have now looks like an anemic data model or worse.
The scenario is unclear to me.
If the code is generated, you don't need any magic: add some metadata to your database objects (e.g. Extended Properties in SQL Server) that flags the "basic" interfaces, and modify your generating template/tool to consider the flags.
If the question is about multiple inheritance, you are out of luck with .Net.
If the code is generated, you may also take advantage of partial classes and methods (are you using .Net 3.5?) to produce code in different source files.
If you need to generate code at run-time there are many techniques, not least ORM tools.
Now, could you be a bit more explicit about your design context?
