Currently I am using EF 5.0.0 with EF Power Tools 3. I am trying to reverse engineer an existing database with Reverse Engineer Code First (Power Tools 3); a sample of doing this can be found at this MSDN link.
The only problem with this is the naming of my database objects. My database objects use lowercase words with underscores as spaces, e.g. item_cart. However, I don't want that naming standard carried over into my C# application.
So I tried to tweak the template to convert each table/field name to follow the application naming standards. This is the conversion code I have written so far.
public static string SqlNameToEntityName(string name)
{
    if (string.IsNullOrEmpty(name)) return name;

    if (name.IndexOf('_') == -1)
    {
        return name[0].ToString().ToUpper() + name.Substring(1);
    }

    // Split on underscores and drop empty segments so leading, trailing
    // and doubled underscores cannot cause index-out-of-range errors.
    StringBuilder entityName = new StringBuilder();
    foreach (string part in name.Split(new[] { '_' }, StringSplitOptions.RemoveEmptyEntries))
    {
        entityName.Append(CustomConvention.UpperFirstChar(part));
    }
    return entityName.ToString();
}

public static string UpperFirstChar(string name)
{
    if (string.IsNullOrEmpty(name)) return "";
    return name[0].ToString().ToUpper() + name.Substring(1).ToLower();
}
Then in Entity.tt, I have modified the template to use this static method: columnName = CustomConvention.SqlNameToEntityName(columnName);. Using this approach, I can convert table names from item_cart to ItemCart well enough (not fully tested yet, though).
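For example, a quick check in a throwaway console app (a hypothetical harness; the table names other than item_cart are made up) should give:
Console.WriteLine(CustomConvention.SqlNameToEntityName("item_cart"));   // ItemCart
Console.WriteLine(CustomConvention.SqlNameToEntityName("_order_line")); // OrderLine
Console.WriteLine(CustomConvention.SqlNameToEntityName("customer"));    // Customer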
Lastly, here are my questions:
Currently I cannot change the generated .cs file name to follow the naming convention, so it still stays item_cart.cs.
Will my approach cause problems in the future?
Is there a better (more standard/cleaner) way of doing this?
Aside from this reverse engineering, what is the best (fastest and cleanest) way to map tables (and perhaps views and procedures) to entities?
Possible related question: Resolving naming convention conflict between entities in EF4 and our database standards?
I'd like to list out the Quote Lines of a Quote, while keeping the interface generic enough to be able to display other entities similarly.
In order to do this, I'm trying to make use of the PrimaryNameAttribute found in the EntityMetadata. Works great for most entities, but for QuoteDetail, the PrimaryNameAttribute is "productidname" - an attribute that doesn't actually exist as part of the QuoteDetail entity.
This field, "productidname", is also the PrimaryNameAttribute for other entities (OpportunityProduct, SalesOrderDetail), and in those cases the field is also missing.
What gives? I feel like I've searched all of Google and it doesn't seem anyone has run into this issue before, so, maybe I'm missing something simple.
Here's the QuoteDetail entity page off of MSDN that shows what I'm talking about: https://msdn.microsoft.com/en-us/library/mt607959.aspx Notice that the PrimaryNameAttribute isn't listed anywhere else on the page.
QuoteDetail (like SalesOrderDetail and OpportunityProduct) is special in that it can relate either to a Catalog Product item or to a Write-In item, and it gets its "name" from either productidname (product from catalog) or productdescription (write-in product). You will need to JOIN the related Product to get its name.
If you are about to develop a generic solution/interface, this is a problem you need to tackle for every Lookup attribute.
If you only ever need the PrimaryNameAttribute from records connected in Lookups, you can retrieve the name from the FormattedValues collection of your "main" entity:
string productname = quotedetail.FormattedValues["productid"];
or, better/safer:
string productname;
quotedetail.FormattedValues.TryGetValue("productid", out productname);
Handling attributes of an arbitrary entity can look like this:
foreach (var key in record.Attributes.Keys)
{
if (record.FormattedValues.ContainsKey(key))
{
string formattedvalue;
if (record.FormattedValues.TryGetValue(key, out formattedvalue))
{
Console.WriteLine(formattedvalue); // use formattedvalue string
}
continue; // skip to next field when found in formatted values
}
object attributevalue;
record.Attributes.TryGetValue(key, out attributevalue);
object actualvalue;
string actualtext = string.Empty;
// handle AliasedValue fields from JOINed/LinkEntities
if (attributevalue.GetType().Name == "AliasedValue")
{
actualvalue = ((AliasedValue)attributevalue).Value;
}
else
{
actualvalue = attributevalue;
}
switch (actualvalue.GetType().Name)
{
case "EntityReference":
actualtext = ((EntityReference)actualvalue).Name; // this will catch Lookup values not contained in FormattedValues when you just created them
break;
case "DateTime":
actualtext = string.Format("{0:dd.MM.yyyy}", ((DateTime)actualvalue).ToLocalTime()); // ... any other dateTime format you'd like
break;
case "Guid":
actualtext = string.Format("{0:D}", actualvalue); // Entity Primary key
break;
default:
actualtext = (string)actualvalue; // anything else
break;
}
Console.WriteLine(actualtext);
}
You will still have to take care of newly assigned OptionSetValue and Money attributes (similar to the EntityReference ones) because those would usually be pulled from FormattedValues.
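For example, the switch above could be extended with cases along these lines (a sketch; it assumes the standard Microsoft.Xrm.Sdk OptionSetValue and Money types):
case "OptionSetValue":
    actualtext = ((OptionSetValue)actualvalue).Value.ToString(); // numeric option value only; the label needs metadata or FormattedValues
    break;
case "Money":
    actualtext = ((Money)actualvalue).Value.ToString("0.00");
    break;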
Since this example deals with existing CRM data, you need to be aware of the pitfall that your entity will likely not include all attributes, so instead of iterating .Attributes.Keys you may want to go over a predefined collection of attribute names.
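That could look roughly like this (the attribute names below are illustrative, not taken from the question):
string[] wantedAttributes = { "productid", "productdescription", "quantity", "priceperunit" };
foreach (var key in wantedAttributes)
{
    if (!record.Contains(key))
    {
        Console.WriteLine("{0} = <not returned>", key);
        continue;
    }
    // ... same per-attribute handling as in the loop above
}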
My personal strategy is usually to create a lightweight ORM to map between typed objects and CRM entities but this would not fit your requirement of a generic interface.
For cases like yours where it's either this or that attribute I put syntactical sugar to work:
string pn = qd.GetAttributeValue<string>("productdescription") ?? (qd.GetAttributeValue<EntityReference>("productid") ?? new EntityReference { Name = string.Empty }).Name;
Try to get the Write-In product name; if it is null, try to get the Lookup name and if this is null, get an empty string from a fake EntityReference.
This allows rather frictionless coding and solves the "either this or that attribute" case nicely.
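If you need this in more than one place, you could wrap the same pattern in a small extension method (an assumed helper, not part of the SDK):
public static class EntityNameExtensions
{
    public static string GetNameOrDescription(this Entity record, string lookupAttribute, string textAttribute)
    {
        // prefer the plain text attribute, fall back to the Lookup name, then to an empty string
        return record.GetAttributeValue<string>(textAttribute)
            ?? (record.GetAttributeValue<EntityReference>(lookupAttribute) ?? new EntityReference()).Name
            ?? string.Empty;
    }
}
// usage:
string productName = quotedetail.GetNameOrDescription("productid", "productdescription");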
After looking through various Lua interpreters for C#, it seems that only one is truly pure C#: MoonSharp. LuaInterpreter (defunct since 2009), which later became NLua, depends on one of two other C# libraries (KeraLua or another lib) and requires a customized lua52.dll (you cannot use the one from lua.org). They have a closed bug report that says to look at the readme for the download location of their customized lua52.dll, but it is absent. You are forced to download these libraries from various sources and pray they work together; on top of that, the multi-file distribution may cause compatibility issues with other programs, since several lua52.dll variations can end up on the end user's computer (assuming they use more than just your program).
The one shining beacon of light for NLua is its apparent popularity; however, the project has not received any significant update in several years. MoonSharp, on the other hand, appears to be completely self-contained, yet it is lacking documentation for common tasks such as loading a table that was built with Lua and working with it.
I have come up with the following code, based on the single example provided on Git and duplicated on their site at moonsharp.org (whichever came first I am unsure, but having one example is not sufficient):
using System;
using System.IO;
using System.Linq;
using MoonSharp.Interpreter;

class Foo {
    void Bar( string accountName, string warcraftPath ) {
        string datastore = Path.Combine(warcraftPath, "WTF", "Account", accountName, "SavedVariables", "DataStore_Containers.lua");
        DynValue table = Script.RunString( File.ReadAllText( datastore ) );
        Console.WriteLine( table.Table.Keys.Count().ToString() );
    }
}
Running this produces an error (the code in the screenshot was slightly different; I adjusted the code pasted here for cleanliness and to make it easier for you to reproduce the problem using the table data in the pastebin link below).
The table I am trying to read looks like the following (simplified; I had to paste it on pastebin because the size exceeds 30,000 characters):
World of Warcraft - Datastore_Containers Lua table sample data
I sort of have something working; it's a bit hackish, but there doesn't seem to be a way to loop through the values, or to explicitly get the sub-tables/values or the keys of the values.
Script s = new Script(CoreModules.Preset_Complete);
// hacked by appending 'return DataStore_ContainersDB' so the chunk returns the table; DoString only seems to hand back a result when the script explicitly returns one.
DynValue dv = s.DoString(luaTable + "\nreturn DataStore_ContainersDB;");
Table t = dv.Table;
foreach(var v in t.Keys)
{
Console.WriteLine( v.ToPrintString() );
}
The problem is that there doesn't seem to be any way for me to get into the sub-table result sets or to access them explicitly, like t["global"] or t.global.
I managed to hack and slash my way through this and came up with a working solution, although it is fairly rudimentary (possibly someone could take this concept and make accessing the sub-data more reasonable):
Script s = new Script(CoreModules.Preset_Complete);
DynValue dv = s.DoString(luaTable + "\nreturn DataStore_ContainersDB;");
Table t = dv.Table;
Table global;
global = t.Get("global").ToObject<Table>().Get("Characters").ToObject<Table>();
foreach (var key in global.Keys)
{
Console.WriteLine( key.ToString() );
}
The MoonSharp library appears to require and depend heavily upon the Script class, which is the premise by which all other methods operate. The DoString method requires the chunk to return a result, or the DynValue will always be void/null. DynValue appears to be the base global handler for the entire Lua process and can also hold functions (i.e. the Lua string could contain several functions, which DynValue would expose and allow to be called from C#, returning their responses as other DynValues).
So if you wish to load a Lua file that ONLY contains data in Lua's table format, you MUST append a return with the table name as the last line. This is why you see:
"\nreturn DataStore_ContainersDB;"
... as the table is named "DataStore_ContainersDB".
Next, the result must be loaded into a fresh Table object, as DynValue is not an actual table but a class construct that holds all the various value formats available (functions, tables, etc.).
Once it is in Table format, you can work with it by looking up key/value pairs by key name, number, or DynValue. In my case, since I know the top-level key names, I call straight through to the sub-table whose key names I do not know and want to work with:
Table.Get( Key )
Since this returns a DynValue, we must then convert/load the object as a table again, which is made convenient by the .ToObject<>() method.
The foreach loop I supplied then loops through the keys available in the sub-table located at: global > Characters > *
... and I then write each key name out to the console using key.ToString().
If there are other sub-tables, as there are in this example, you can traverse into the unknown ones using the same concept, expanding the foreach loop like this:
foreach (var key in global.Keys)
{
if(IsTable(global.Get(key.String)))
{
Console.WriteLine("-------" + key.ToPrintString() + "-------");
Table characterData = global.Get(key.String).ToObject<Table>();
foreach (var characterDataField in characterData.Keys)
{
if( !IsTable(characterData.Get(characterDataField.String)))
{
Console.WriteLine(string.Format("{0} = {1}", characterDataField.ToPrintString(), characterData.Get(characterDataField.String).ToPrintString()));
}
else
{
Console.WriteLine(string.Format("{0} = {1}", characterDataField.ToPrintString(), "Table[]"));
}
}
Console.WriteLine("");
}
}
... and here is the method I wrote to quickly check whether a value is a table or not. This is the IsTable() method used in the foreach example above.
private static bool IsTable(DynValue table)
{
switch (table.Type)
{
case DataType.Table:
return true;
case DataType.Boolean:
case DataType.ClrFunction:
case DataType.Function:
case DataType.Nil:
case DataType.Number:
case DataType.String:
case DataType.TailCallRequest:
case DataType.Thread:
case DataType.Tuple:
case DataType.UserData:
case DataType.Void:
case DataType.YieldRequest:
break;
}
return false;
}
I have done what I could to make this workable; however, as stated before, I do see room for improving the recursion here. Checking the data type of every sub-object and then loading it feels very redundant, and it seems like this could be simplified.
I am open to other solutions on this question, ideally in the form of some enhancement that would make this not so clunky to use.
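For instance, the per-level type checks could be folded into one small recursive dump. A rough sketch (untested, using the same MoonSharp types as above):
private static void DumpTable(Table table, int depth = 0)
{
    string indent = new string(' ', depth * 2);
    foreach (TablePair pair in table.Pairs)
    {
        if (pair.Value.Type == DataType.Table)
        {
            // recurse into nested tables instead of re-checking and re-loading at each level
            Console.WriteLine(indent + pair.Key.ToPrintString() + ":");
            DumpTable(pair.Value.Table, depth + 1);
        }
        else
        {
            Console.WriteLine(string.Format("{0}{1} = {2}", indent, pair.Key.ToPrintString(), pair.Value.ToPrintString()));
        }
    }
}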
For dealing with Tables within Tables, which is my preferred way of doing things, I came up with this:
Script s = new Script();
s.DoString(luaCode);
Table tableData = s.Globals[rootTableIndex] as Table;
for (int i = 1; i < tableData.Length + 1; i++) {
Table subTable = tableData.Get(i).Table;
//Do cool stuff here with the data
}
Granted this requires you to know the index of the Global rootTable.
For my use of this I do the following (still testing out things)
string luaCode = File.ReadAllText(Path.Combine(weaponDataPath, "rifles.Lua"));
Script script = new Script();
script.DoString(luaCode);
Gun rifle = new Gun();
Table rifleData = script.Globals["rifles"] as Table;
for (int i = 1; i < rifleData.Length + 1; i++) {
Table rifleTable = rifleData.Get(i).Table;
rifle.Name = rifleTable.Get("Name").String;
rifle.BaseDamage = (int)rifleTable.Get("BaseDamage").Number;
rifle.RoundsPerMinute = (int)rifleTable.Get("RoundsPerMinute").Number;
rifle.MaxAmmoCapacity = (int)rifleTable.Get("MaxAmmoCapacity").Number;
rifle.Caliber = rifleTable.Get("Caliber").String;
rifle.WeaponType = "RIFLE";
RiflePrototypes.Add(rifle.Name, rifle);
}
This requires some assumptions about the tables and how the values are named, but if you are using this for object member assignment, I don't see why you would care about elements in the table that are not part of the object, which you define with assignments of the form type.Member = table.Get(member equivalent index).member type.
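For completeness, the Gun class and RiflePrototypes collection assumed in the snippet above could be as simple as the following (a guess at their shape based only on the members used; they are not part of MoonSharp):
public class Gun
{
    public string Name { get; set; }
    public int BaseDamage { get; set; }
    public int RoundsPerMinute { get; set; }
    public int MaxAmmoCapacity { get; set; }
    public string Caliber { get; set; }
    public string WeaponType { get; set; }
}

// keyed by weapon name, matching RiflePrototypes.Add(rifle.Name, rifle)
Dictionary<string, Gun> RiflePrototypes = new Dictionary<string, Gun>();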
Hi,
Can somebody please give me a pointer on this? I have 8 servers, each with 8 databases, which look identical except for the server/database name. We are talking thousands of tables.
I create my data contexts with sqlmetal.exe
After creating my data contexts, I import them into the application and then I run comparison scripts over the databases to compare results.
My problem is dynamically switching between data contexts.
Datacontext.DAL.DUK1 duk1sdi = new Datacontext.DAL.DUK1(connectionString);
Datacontext.DAL.DUK3 duk3sdi = new Datacontext.DAL.DUK3(connectionString);
string fromOne = runQuery(duk1sdi);
string fromThree = runQuery(duk3sdi);
public static string runQuery(DataContext duk)
{
var query =
from result in duk.TableA
select result.Total;
string returnString = query;
return returnString;
}
I have no problem with the query running when the duk is predefined, however how do I define and pass the datacontext to the function?
The error I get is:
Error 1 'System.Data.Linq.DataContext' does not contain a definition
for 'TableA' and no extension method 'TableA' accepting a first
argument of type 'System.Data.Linq.DataContext' could be found (are
you missing a using directive or an assembly reference?)
You could use the GetTable<T> method, where T is the type of the table, e.g. TableA.
public static string runQuery(DataContext duk) {
var table = duk.GetTable<TableA>();
var query = from result in table select result.Total;
...
}
However, all of the TableA types will need to be strictly the same type (I'm pretty sure).
Otherwise you would need to literally branch the logic for handling each context. Since you can extend your DataContext instances (in general, maybe not in your specific case), you could have them share an interface that exposes a collection property of TableA, but then you would need a higher-level context wrapper to pass around, unless you pass around the collection itself by altering the method signature.
You can use interfaces. Check this answer, but be sure to script the interfaces using a .tt file, given the number of tables you have.
Edit:
If you have generated contexts that you want to use interchangeably in a reusable method, the problem is that the generated TableA classes are not reusable, since they are different types (even though the names may match, that doesn't make them equal). Therefore you need to abstract over the actual types, and one way to do this is to use interfaces. You build your reusable method around an interface which abstracts the specific context type and table type. The downside is that you have to implement the interfaces on the generated contexts and table types. This, though, is something you can solve with a .tt script.
Pseudo code:
// Define interface for table
public interface ITableA {
// ... properties
}
// Define interface for context
public interface IMyContext {
IQueryable<ITableA> TableA { get; }
}
// Extend TableA from DUK1 and DUK1 itself (the partial classes must sit in the
// same namespace as the generated types)
namespace Datacontext.DAL {
    public partial class TableA : ITableA {
    }

    // Extend DUK1
    public partial class DUK1 : IMyContext {
        IQueryable<ITableA> IMyContext.TableA {
            get { return TableA; }
        }
    }
}
// Same for DUK3 and TableA FROM DUK3
// Finally, your code
Datacontext.DAL.DUK1 duk1sdi = new Datacontext.DAL.DUK1(connectionString);
Datacontext.DAL.DUK3 duk3sdi = new Datacontext.DAL.DUK3(connectionString);
string fromOne = runQuery(duk1sdi);
string fromThree = runQuery(duk3sdi);
public static string runQuery(IMyContext duk) {
// Note: method accepts interface, not specific context type
var query = from result in duk.TableA
select result.Total;
string returnString = query;
return returnString;
}
If your schema is identical between databases, why script the dbml for all of them? Just create one context with its associated classes and dynamically switch out the connection string when instantiating the context.
var duk1sdi = new Datacontext.DAL.DUK1(connectionString1);
var duk3sdi = new Datacontext.DAL.DUK1(connectionString2);
Thanks, guys, I think I found the simplest solution for me, based on a bit of both your answers and on RTFM (Programming Microsoft LINQ in Microsoft .NET Framework 4 by Paolo Pialorsi and Marco Russo).
This way I don't have to use the large DBML files. It is a shame, because I'm going to have to create hundreds of tables this way, but I can now switch between connection strings on the fly.
First I create the table structure. (outside the program code block)
[Table(Name = "TableA")]
public class TableA
{
[Column] public int result;
}
Then I define the table for use:
Table<TableA> TableA = dc.GetTable<TableA>();
And then I can query from it:
var query =
    from row in TableA
    select row.result;
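Putting the pieces together, a minimal sketch of how this can run the same query against any server just by swapping the connection string (the method name and the FirstOrDefault projection are illustrative):
using System.Data.Linq;
using System.Linq;

public static int RunTotalQuery(string connectionString)
{
    // one attributed TableA class works against every server/database
    using (var dc = new DataContext(connectionString))
    {
        Table<TableA> tableA = dc.GetTable<TableA>();
        var query = from row in tableA
                    select row.result;
        return query.FirstOrDefault();
    }
}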
I am trying to write a GenericEFRepository which will be used by other Repositories. I have a Save method as below.
public virtual void Save(T entity) // where T : class, IEntity, new() And IEntity enforces long Id { get; set; }
{
var entry = _dbContext.Entry(entity);
if (entry.State != EntityState.Detached)
return; // context already knows about entity, don't do anything
if (entity.Id < 1)
{
_dbSet.Add(entity);
return;
}
var attachedEntity = _dbSet.Local.SingleOrDefault(e => e.Id == entity.Id);
if (attachedEntity != null)
_dbContext.Entry(attachedEntity).State = EntityState.Detached;
entry.State = EntityState.Modified;
}
You can see the problem in the comments of the code below:
using (var uow = ObjectFactory.GetInstance<IUnitOfWork>()) // uow is implemented like EFUnitOfWork which gives the DbContext instance to repositories in GetRepository
{
var userRepo = uow.GetRepository<IUserRepository>();
var user = userRepo.Get(1);
user.Name += " Updated";
userRepo.Save(user);
uow.Save(); // OK only the Name of User is Updated
}
using (var uow = ObjectFactory.GetInstance<IUnitOfWork>())
{
var userRepo = uow.GetRepository<IUserRepository>();
var user = new User
{
Id = 1,
Name = "Brand New Name"
};
userRepo.Save(user);
uow.Save();
// NOT OK
// All fields (Name, Surname, BirthDate etc.) in User are updated
// which causes unassigned fields to be cleared on db
}
The only solution I can think of is creating Entities via repository like userRepo.CreateEntity(id: 1) and repository will return an Entity which is attached to DbContext. But this seems error prone, still any developer may create an entity using new keyword.
What are your solution suggestions about this particular problem?
Note: I already know about cons and pros of using a GenericRepository and an IEntity interface. So, "Don't use a GenericRepository, don't use an IEntity, don't put a long Id in every Entity, don't do what you are trying to do" comments will not help.
Yes, it is error prone, but that is simply the problem with EF and repositories. You must either create the entity and attach it before you set any data you want to update (Name in your case), or you must set the modified state for each property you want to persist instead of for the whole entity (and, as you can imagine, a developer can again forget to do that).
The first solution leads to a special method on your repository doing just this:
public T Create(long id) {
T entity = _dbContext.Set<T>().Create();
entity.Id = id;
_dbContext.Set<T>().Attach(entity);
return entity;
}
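Used from the question's calling code, that could look like this (a sketch; it assumes Create is exposed through IUserRepository):
using (var uow = ObjectFactory.GetInstance<IUnitOfWork>())
{
    var userRepo = uow.GetRepository<IUserRepository>();
    var user = userRepo.Create(1);   // attached as Unchanged, no properties marked modified yet
    user.Name = "Brand New Name";    // change tracking now sees only Name as modified
    uow.Save();
}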
The second solution needs something like
public void Save(T entity, params Expression<Func<T, object>>[] properties) {
    ...
    _dbContext.Set<T>().Attach(entity);
    if (properties.Length > 0) {
        foreach (var propertyAccessor in properties) {
            // strip the Convert node the compiler adds for value-type properties,
            // then mark the selected property as modified by name
            var body = propertyAccessor.Body;
            if (body.NodeType == ExpressionType.Convert) {
                body = ((UnaryExpression)body).Operand;
            }
            var propertyName = ((MemberExpression)body).Member.Name;
            _dbContext.Entry(entity).Property(propertyName).IsModified = true;
        }
    } else {
        _dbContext.Entry(entity).State = EntityState.Modified;
    }
}
and you will call it like:
userRepository.Save(user, u => u.Name);
This is kind of a fundamental problem of this approach because you expect the repository to magically know which fields you changed and which ones you didn't. Using null as a signal for "unchanged" does not work in case null is a valid value.
You'd need to tell the repository which fields you want to have written, for example sending a string[] with the field names. Or one bool for each field. I do not think this is a good solution.
Maybe you can invert the control flow like this:
var entity = repo.Get(1);
entity.Name += "x";
repo.SaveChanges();
That would allow change tracking to work. It is closer to how EF wants to be used.
Alternative:
var entity = repo.Get(1);
entity.Name += "x";
repo.Save(entity);
While the other two answers provide good insight into how you can perhaps avoid this issue, I think it's worth pointing out a couple of things.
What you are trying to do (i.e. a proxy entity update) is extremely EF-centric and IMO doesn't really make sense outside of the EF context, so it doesn't make sense for a generic repository to be expected to behave in this way.
You actually haven't even gotten the flow quite right for EF: if you attach an object with a few fields already set, EF will consider what you told it to be the current DB state unless you modify a value or set a modified flag. To do what you are attempting without a select, you would normally attach an object without the name and then set the name after attaching the ID-only object.
Your approach is normally used for performance reasons, and I would suggest that by abstracting over the top of an existing framework you are almost always going to suffer some logical performance degradation. If this is a big deal, maybe you shouldn't be using a repository? The more you add to your repository to cater to performance concerns, the more complex and restrictive it becomes, and the harder it gets to provide more than one implementation.
All that being said I do think you can handle this particular case in a generic situation.
This is one possible way you could do it
public void UpdateProperty(Expression<Func<T,bool>> selector, FunctionToSetAProperty setter/*not quite sure of the correct syntax off the top of my head*/)
{
// look in local graph for T and see if you have an already attached version
// if not attach it with your selector value set
// set the property of the setter
}
Hope this makes some sense; I'm not by my dev box atm, so I can't really do a working sample.
I think this is a better approach for a generic repository, as it allows you to implement this same behavior in multiple different ways. The above may work for EF, but there will be different methods if you have, for example, an in-memory repository. This approach allows you to write implementations that fulfill the intent, rather than restricting your repository to only act like EF.
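As a rough illustration only (not necessarily what the answer intends), one EF-flavored way to fill in that skeleton, reusing the _dbSet/_dbContext fields from the question and selecting the property by name for brevity:
public void UpdateProperty(long id, string propertyName, object value)
{
    // reuse a tracked instance if the context already knows this entity
    var entity = _dbSet.Local.SingleOrDefault(e => e.Id == id);
    if (entity == null)
    {
        entity = _dbSet.Create();   // stub with only the key set
        entity.Id = id;
        _dbSet.Attach(entity);
    }
    var entry = _dbContext.Entry(entity);
    entry.Property(propertyName).CurrentValue = value;
    entry.Property(propertyName).IsModified = true;
}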
In my business layer, I need many, many methods that follow the pattern:
public BusinessClass PropertyName
{
get
{
if (this.m_LocallyCachedValue == null)
{
if (this.Record == null)
{
this.m_LocallyCachedValue = new BusinessClass(
this.Database, this.PropertyId);
}
else
{
this.m_LocallyCachedValue = new BusinessClass(
this.Database, this.Record.ForeignKeyName);
}
}
return this.m_LocallyCachedValue;
}
}
I am still learning C#, and I'm trying to figure out the best way to write this pattern once and add methods to each business layer class that follow this pattern with the proper types and variable names substituted.
BusinessClass is a typename that must be substituted, and PropertyName, PropertyId, ForeignKeyName, and m_LocallyCachedValue are all variables that should be substituted for.
Are attributes usable here? Do I need reflection? How do I write the skeleton I provided in one place and then just write a line or two containing the substitution parameters and get the pattern to propagate itself?
EDIT: Modified my misleading title -- I am hoping to find a solution that doesn't involve code generation or copy/paste techniques, and rather to be able to write the skeleton of the code once in a base class in some form and have it be "instantiated" into lots of subclasses as the accessor for various properties.
EDIT: Here is my solution, as suggested but left unimplemented by the chosen answerer.
// I'll write many of these...
public BusinessClass PropertyName
{
get
{
return GetSingleRelation(ref this.m_LocallyCachedValue,
this.PropertyId, "ForeignKeyName");
}
}
// That all call this.
public TBusinessClass GetSingleRelation<TBusinessClass>(
ref TBusinessClass cachedField, int fieldId, string contextFieldName)
{
if (cachedField == null)
{
if (this.Record == null)
{
ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
new Type[] { this.Database.GetType(), typeof(int) });
cachedField = (TBusinessClass)ci.Invoke(
new object[] { this.Database, fieldId });
}
else
{
var obj = this.Record.GetType().GetProperty(contextFieldName).GetValue(
this.Record, null);
ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
new Type[] { this.Database.GetType(), obj.GetType()});
cachedField = (TBusinessClass)ci.Invoke(
new object[] { this.Database, obj });
}
}
return cachedField;
}
Check out CodeSmith. They have a free trial and it's not too expensive if you want to purchase it. I've used it and it's great for generating code based on databases (which is what I'm guessing you're doing). Once you have your template setup, you can regenerate the code at any time. You can have it read the property names right from the database schema or you can enter the values you want to use. I'm sure you could even get it to read the values from a file if you wanted to generate a whole batch of classes at once.
You could check out using T4 Templates. I am not quite sure which is "the" resource for T4, but I found a good article on it in VisualStudioMagazine.
It is free, has an easy to use syntax and is actually used by a lot of projects (e.g. Subsonic) for code generation, so you should be able to find some real-world scenarios.
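For example, a small T4 template along these lines could stamp out your skeleton with the names substituted (a sketch; the substitution values below are placeholders you would fill in or load from your own metadata):
<#@ template language="C#" #>
<#
    // hypothetical substitution table: one entry per lazily-loaded property to generate
    var props = new[] {
        new { Type = "BusinessClass", Name = "PropertyName", Id = "PropertyId", Fk = "ForeignKeyName" }
    };
#>
<# foreach (var p in props) { #>
private <#= p.Type #> m_<#= p.Name #>Value;

public <#= p.Type #> <#= p.Name #>
{
    get
    {
        if (this.m_<#= p.Name #>Value == null)
        {
            if (this.Record == null)
            {
                this.m_<#= p.Name #>Value = new <#= p.Type #>(this.Database, this.<#= p.Id #>);
            }
            else
            {
                this.m_<#= p.Name #>Value = new <#= p.Type #>(this.Database, this.Record.<#= p.Fk #>);
            }
        }
        return this.m_<#= p.Name #>Value;
    }
}
<# } #>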
You can code-gen using CodeSmith or MyGeneration or the like. You'd probably store a list of classes and properties somewhere and then pass that data to the code generator. You may want to investigate using pre-build events to re-gen those classes prior to compiling the solution.
Or, you could bake this functionality into a base class or helper method.
public BusinessClass MyProperty
{
get { return GetCached("MyProperty", "PropertyId", "FKName", "LocalValue"); }
}
I'll leave the body of GetCached() up to you, but it's basically the same as what you posted with the variables passed in as arguments.
If any of those values are the same for all properties in a class then you could of course pull them from instance variables, and only pass to GetCached() those things that vary on a per-property basis.
Bottom line: if there's a way to abstract the logic of what you're doing into a base method, so that using that logic becomes a one-liner, then that's probably the best way to go because it's easier to override when you have special cases. If you can't do that, code generation can do the grunt work for you, but you'll need to work out things like when do I re-gen, how do I regen, etc.