In C#, I have started using attributes to add metadata to my classes. I'm writing a plugin system in which classes require some metadata to be recognized as plugins.
I don't remember exactly how the Visual Studio extensions do it, but I'm looking for an opinion or design rule that tells me whether I should create one attribute with many (optional) parameters or several attributes with fewer parameters each.
For example, the decision is about these two different code pieces:
Single attribute, many properties:
[Plugin("Image Tools", Author = "PacMani", Description = "blahblah",
Website = "http://blahblah.contoso.com/", Image = "Icon32x32.png")]
public class ImagePlugin : Plugin
{
// ...
}
versus
[Plugin("Image Tools", Author = "PacMani", Description = "blahblah")]
[Website("http://blahblah.contoso.com/")]
[Image("Icon32x32.png")]
public class ImagePlugin : Plugin
{
// ...
}
or even a version that splits "Author" and "Description" into separate attributes as well.
My inclination is not to split thematically grouped properties. But where does such a group start and end? The properties above all belong to a "details" or "descriptive information" group about the plugin.
In your case I would recommend putting all these informative pieces of data into one attribute. Why? Because:
They are all connected to one concept: the plugin.
They are purely informative.
It would probably be different if there were some logic connected to each property; then I would consider splitting them.
There is one other scenario in which you might want to split these into separate attributes: when you can logically separate the types of plugins. You could then have a base PluginAttribute with only the basic properties, and e.g. an ImagePluginAttribute that inherits from PluginAttribute and adds some extra properties.
But in your case I would not divide it into one attribute per property (WebsiteAttribute, AuthorAttribute, VersionAttribute).
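A single attribute along these lines would cover it. The attribute class below is only a sketch (the property names are taken from the question; everything else is illustrative):

```csharp
using System;

// Sketch: one metadata attribute with a required positional parameter
// and optional named properties for the purely informative details.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public sealed class PluginAttribute : Attribute
{
    public PluginAttribute(string name) { Name = name; }

    public string Name { get; }              // required (positional)
    public string Author { get; set; }       // optional (named)
    public string Description { get; set; }
    public string Website { get; set; }
    public string Image { get; set; }
}

[Plugin("Image Tools", Author = "PacMani", Website = "http://blahblah.contoso.com/")]
public class ImagePlugin { /* ... */ }
```

The host can then read all the metadata back in a single reflection call: `(PluginAttribute)Attribute.GetCustomAttribute(typeof(ImagePlugin), typeof(PluginAttribute))`.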
I might have misunderstood the problem, but why not use a config file? It seems more appropriate than attributes in this case, no?
You can recognize the classes as plugins by their inheritance, and the rest of the data in those attributes seems like plain metadata that should be defined either as properties on the class or in config for easy access (for values that change outside of the code, such as the website).
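Recognizing plugins by inheritance alone could look like this minimal sketch (the `Plugin` base class and the loader are assumptions for illustration, not the asker's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public abstract class Plugin { }

public class ImagePlugin : Plugin
{
    // Informative metadata as plain properties instead of attributes.
    public string Author => "PacMani";
    public string Website => "http://blahblah.contoso.com/";
}

public static class PluginLoader
{
    // Every concrete subclass of Plugin in the assembly counts as a plugin.
    public static IEnumerable<Type> FindPlugins(Assembly assembly) =>
        assembly.GetTypes().Where(t => !t.IsAbstract && t.IsSubclassOf(typeof(Plugin)));
}
```

In a real plugin host the assembly would typically be loaded from a plugins folder rather than being the executing assembly.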
I know it is possible to pass a dictionary of compile-time parameters to the dynamic filter controls by using the UIHint attribute on the models. Unfortunately, this is not enough in our case.
For example, consider this model:
public class Device
{
public string Unit {get; set;}
public string Name {get; set;}
}
A 'Unit' is a segregating property in our models: one unit does not interfere with another, while multiple units coexist on the same server and are accessed by different clients with different needs.
Ideally, I wanted to load a different dynamic field when the unit has a certain value.
Consider the case where I have two units:
unit1: Requires no customization. Loads the [default] dynamic templates from the DynamicData\FieldTemplates folder
unit2: Clients in "unit2" requested a different way to show the device Name, so we create a custom folder for the unit and change the Text.ascx template there.
The structure would then look like this:
DynamicData\FieldTemplates\Text.ascx
DynamicData\FieldTemplates\unit2\Text.ascx
Notice that there are now two 'Text' field templates. The idea is to use the one inside the 'unit2' folder whenever the instance I'm manipulating has 'unit2' as its Unit property value. The controls in the root FieldTemplates folder would then act as a fallback for unit values that have no folder or controls of their own.
These folders would be created after the project has been deployed and the units created, which shouldn't be a problem at all from what I can see. The only files that would be present in the original project are the "default" templates.
Initially I thought about creating my own FieldTemplateFactory implementation and attaching it to the MetaModel, but it doesn't seem to get access to the actual instance of the object, only its MetaColumn. After that, I decided to look at how the DynamicField and DynamicControl controls are implemented, but couldn't find any extension point that would do what I want.
I've seen people do something similar, but with entire page templates. By using custom routes, one can make that work. In my case though, since we are talking Field Templates, that doesn't apply.
Is there another way I should be approaching this? Can I somehow base the decision of template loading on the value of a property inside the object? Is there some other, perhaps simpler, strategy to show different templates for different units?
You could maybe do this through the route, but not based on the live data, as the field template is selected before the data is bound.
If I have a set of interfaces which may have several implementations (e.g. in-memory, NHibernate, XML-based, etc.), is it wise to provide namespace hints in the class names themselves? For example:
MyDomain.Infrastructure.ISomeProvider
MyDomain.Infrastructure.ISomeOtherProvider
MyDomain.Infrastructure.IYetAnotherProvider
I might then have:
MyDomain.Infrastructure.Impl.MemoryBased.SomeProvider
MyDomain.Infrastructure.Impl.MemoryBased.SomeOtherProvider
MyDomain.Infrastructure.Impl.MemoryBased.YetAnotherProvider
MyDomain.Infrastructure.Impl.XmlFileBased.SomeProvider // etc...
MyDomain.Infrastructure.Impl.NHibernate.SomeProvider // etc...
vs.
MyDomain.Infrastructure.Impl.MemoryBased.MemoryBasedSomeProvider
MyDomain.Infrastructure.Impl.MemoryBased.MemoryBasedSomeOtherProvider
MyDomain.Infrastructure.Impl.MemoryBased.MemoryBasedYetAnotherProvider
MyDomain.Infrastructure.Impl.XmlFileBased.XmlSomeProvider // etc...
MyDomain.Infrastructure.Impl.NHibernate.NHibernateSomeProvider // etc...
In the second case, the class name itself makes it clear which implementation I am using anywhere in my code, but it seems a bit redundant to group the classes by namespace and then repeat that grouping in the class name, no?
A third option might be:
MyDomain.Infrastructure.ISomeProvider
MyDomain.Infrastructure.Impl.MemoryBasedSomeProvider
MyDomain.Infrastructure.Impl.MemoryBasedSomeOtherProvider
MyDomain.Infrastructure.Impl.MemoryBasedYetAnotherProvider
MyDomain.Infrastructure.Impl.XmlSomeProvider // etc...
MyDomain.Infrastructure.Impl.NHibernateSomeProvider // etc...
I have eliminated the redundant namespaces, but now the only way to group / organize the classes is by class name prefix. I suppose I could separate them into folders and manually adjust the namespaces in any newly created files. Are there any clear advantages for one of these styles over the others?
Good question. I'll answer it with another: how likely is it that someone will need to use multiple implementations of ISomeProvider at the same time? If so, disambiguating them only by namespace will force some nasty fully qualified type names.
If not, I'd use the namespace to indicate the nature of the implementation but share the same names throughout. Either way, the fact that your API is defined by interfaces rather than concrete implementations means that implementations can be interchanged very easily regardless of which option you go for.
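Even if the same-named classes do meet in one file, C# using aliases keep the call sites readable without baking the implementation into the class name. A small sketch (the namespaces come from the question; the alias names and empty classes are invented here):

```csharp
// Aliases pin each same-named provider to a readable local name,
// avoiding fully qualified type names at every use site.
using MemoryProvider = MyDomain.Infrastructure.Impl.MemoryBased.SomeProvider;
using XmlProvider = MyDomain.Infrastructure.Impl.XmlFileBased.SomeProvider;

namespace MyDomain.Infrastructure.Impl.MemoryBased
{
    public class SomeProvider { }
}

namespace MyDomain.Infrastructure.Impl.XmlFileBased
{
    public class SomeProvider { }
}

public static class Demo
{
    public static object CreateMemory() => new MemoryProvider();
    public static object CreateXml() => new XmlProvider();
}
```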
Option #1
MyDomain.Infrastructure.Impl.MemoryBased.SomeProvider
MyDomain.Infrastructure.Impl.MemoryBased.SomeOtherProvider
MyDomain.Infrastructure.Impl.MemoryBased.YetAnotherProvider
MyDomain.Infrastructure.Impl.XmlFileBased.SomeProvider // etc...
MyDomain.Infrastructure.Impl.NHibernate.SomeProvider // etc...
This would be my preferred option. You could argue that each implementation (in-memory, ORM, XML) should live in its own project, with the required implementation loaded at runtime by your IoC container depending on the requirements at the time.
Messing around with namespaces and then also baking the implementation type into the class name is overkill, and it will make your namespaces look pointless to other developers.
If my domain object should contain string properties in 2 languages, should I create 2 separate properties or create a new type BiLingualString?
For example in plant classification application, the plant domain object can contain Plant.LatName and Plant.EngName.
The number of bilingual properties in the whole domain is not big, about 6-8; I only need to support two languages, and the information must be presented to the UI in both languages at the same time (so this is not localization). The requirements will not change during development.
It may look like an easy question, but this decision will have an impact on validation, persistence, object cloning and many other things.
Negative sides I can think of for using a new BiLingualString type:
Validation: if I'm going to use DataAnnotations, the Enterprise Library Validation Application Block, or FluentValidation, this will require more work; object-graph validation is harder than simple property validation.
Persistence: either NHibernate or EF will require more work with complex properties.
OOP: more complex object initialization; I will have to initialize this new type in the constructor before I can use it.
Architecture: converting objects to pass them between layers is harder; auto-mapping tools will require more manual work.
While reading your question I kept thinking "why not localization?", but when I read that the information should be presented to the UI in both languages at the same time, I agreed it makes sense to use properties.
In this case I would go for a class with one string per language, like the BiLingualString you mentioned:
public class Names
{
public string EngName {get;set;}
public string LatName {get;set;}
}
Then I would use this class in my main Plant class like this:
public class Plant : Names
{
}
If you're 100% sure that it will always be only Latin and English, I would just stick with the simplest solution: two string properties. It is also more flexible in the UI than having a BiLingualString, and you won't have to deal with complex types when persisting.
To help decide, I suggest considering how consistent this behavior will be across all layers. If you expose these as two separate properties on the business object, I would also expect to see them stored as two separate columns in a database record, for example, rather than as two translations of the same property stored in a separate table. It does seem odd to store translations this way, but your justifications sound reasonable, and 6 properties is not unmanageable. Just be sure you don't intend to add more languages in the future.
If you expect this system to be somewhat dynamic, in that you may need to add another language at some point, it would make more sense to implement this differently so that you don't have to alter the schema when a new language needs to be supported.
I guess the thing to balance is this: weigh the likelihood of having to adjust the properties to accommodate a new language against the simplicity you gain by exposing them directly as separate properties rather than loading translations as a separate layer.
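If more languages ever do become likely, one alternative is a per-property translation map, so a new language needs no schema or class change. A purely illustrative sketch (these types are not from the question):

```csharp
using System.Collections.Generic;

// Sketch: per-property translations keyed by language code.
// Adding a third language means adding a dictionary entry, not a property.
public class LocalizedString
{
    private readonly Dictionary<string, string> _values = new Dictionary<string, string>();

    public string this[string language]
    {
        get { return _values.TryGetValue(language, out var v) ? v : null; }
        set { _values[language] = value; }
    }
}

public class Plant
{
    public LocalizedString Name { get; } = new LocalizedString();
}
```

Usage: `plant.Name["en"] = "Oak"; plant.Name["la"] = "Quercus";`. The trade-off is exactly the one listed in the question: validation, persistence and mapping all get harder than with two plain string properties.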
How would you design an application (classes, interfaces in class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third party data sources, which will most likely be in XML?
For instance, let us say we have a Products table in our DB which has columns
Id
Title
Description
TaxLevel
Price
and on the other side we have for instance Products:
ProductId
ProdTitle
Text
BasicPrice
Quantity.
Currently I do it like this:
Convert the third-party XML to XSDs and generated classes, and then deserialize its contents into strongly typed objects (what we get as a result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.).
Then I have methods like this:
InsertProduct(ThirdPartyProduct newproduct)
I do not use interfaces at the moment, but I would like to. What I would like is to implement something like:
public class Contoso_ProductSynchronization : ProductSynchronization
{
    public void InsertProduct(ContosoProduct p)
    {
        Product product = new Product(); // this is our Entity class
        // do the assignments from p to product here
        using (SyncEntities db = new SyncEntities())
        {
            // ....
            db.AddToProducts(product);
        }
    }

    // The problem is that Product and ContosoProduct have no architectural
    // connection right now, so I cannot write the method like this instead:
    //
    // public void InsertProduct(ContosoProduct p)
    // {
    //     Product product = (Product)p;
    //     using (SyncEntities db = new SyncEntities())
    //     {
    //         // ....
    //         db.AddToProducts(product);
    //     }
    // }
}
where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types: classes like ContosoProduct and NorthwindProduct might be created from the third-party XMLs (so preferably I would continue to use deserialization).
Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller: you have numerous providers, and each one uses its own proprietary XML format. I don't mind the development that will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and to support that.
In your replies, please focus on design and not so much on data access technologies because most are pretty straightforward to use (if you need to know, EF will be used for interacting with our database).
[EDIT: Design note]
OK, from a design perspective I would run XSLT on the incoming XML to transform it into a unified format. It is also very easy to validate the resulting XML against a schema.
Using XSLT, I would stay away from any interface or abstract class and just have one class implementation in my code: the internal class. It keeps the code base clean, and the XSLT files themselves should be pretty short if the data is as simple as you state.
Documenting the transformations can easily be done wherever you keep your project documentation.
If you decide you absolutely want one class per XML format (or if one customer perhaps hands you a .NET DLL instead of XML), then I would make each proxy class implement an interface or inherit an abstract class (based on your internal class) and implement the mappings per property as needed in the proxy classes. That way you can treat any proxy as your base/internal class.
But it seems to me that doing the conversion/mapping in code will make the design a bit messier.
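The proxy-class route could be sketched like this (the shape of Product and all names here are illustrative, loosely based on the columns listed in the question):

```csharp
// Internal entity (illustrative shape).
public class Product
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

// Each deserialized proxy knows how to map itself to the internal class.
public interface IProductSource
{
    Product ToProduct();
}

public class ContosoProduct : IProductSource
{
    // Properties matching the third-party XML element names.
    public string ProdTitle { get; set; }
    public decimal BasicPrice { get; set; }

    public Product ToProduct() =>
        new Product { Title = ProdTitle, Price = BasicPrice };
}
```

The synchronization code then only ever sees IProductSource, so new formats plug in without touching it.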
[Original Answer]
If I understand you correctly, you want to map a ThirdPartyProduct class onto your own internal class.
My initial thought is class mapping. Use something like AutoMapper and configure the mappings as you create your XML deserialization proxies. If your deserialization ends up with the same property names as your internal class, there is less configuration to do for the mapper: convention over configuration.
I'd like to hear anyone's thoughts on going this route.
Another approach would be to add a .ToInternalProduct(ThirdPartyClass) method in a converter class, and keep adding overloads as you add more external classes.
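That converter approach might look like this (all class shapes are illustrative, based on the field names in the question):

```csharp
// Internal entity (illustrative shape).
public class Product
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

// Deserialized third-party classes (illustrative).
public class ContosoProduct
{
    public string ProdTitle { get; set; }
    public decimal BasicPrice { get; set; }
}

public class NorthwindProduct
{
    public string ProductName { get; set; }
    public decimal UnitPrice { get; set; }
}

// One overload per external format; add another as each new format appears.
public static class ProductConverter
{
    public static Product ToInternalProduct(ContosoProduct p) =>
        new Product { Title = p.ProdTitle, Price = p.BasicPrice };

    public static Product ToInternalProduct(NorthwindProduct p) =>
        new Product { Title = p.ProductName, Price = p.UnitPrice };
}
```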
The third approach is for the XSLT-inclined: if you love XSLT, you could transform the XML into something that can be deserialized directly into your internal product class.
Which of the three I'd choose would depend on the skills of the programmers and on who will maintain the addition of new external formats. The XSLT approach requires no recompiling as new formats arrive, which might be an advantage.
It would be really handy to be able to somehow say that certain properties in the generated entity classes should, for example, be decorated with validation attributes (in addition to the LINQ to SQL column attributes).
Is it a T4 template someplace? Or are there other ways to skin the cat?
Damien Guard has written T4 templates that can be customized. See:
http://damieng.com/blog/2008/09/14/linq-to-sql-template-for-visual-studio-2008
...and:
http://visualstudiomagazine.com/listings/list.aspx?id=560
No; the SqlMetal tool handles the generation of the C# (or VB, for that matter), and how the code is generated is defined within the tool itself.
I'm not familiar with the template style you want, but you could try extending the generated classes (if your changes aren't that big), since they are just partial classes.
Otherwise you would need to write, or look for, a custom implementation of SqlMetal.
Unfortunately, with partial classes you cannot add attributes to a member from another part of the partial class - i.e. if SqlMetal defines property Foo, you can't add an attribute to Foo in your own half of the .cs.
This takes away one of the (usually) more powerful ways of customizing such files... you would probably have to either take a chance and hand-edit the generated file (after detaching it from the DBML completely), or write your own DBML parser from scratch (maybe using XSLT). Not easy.
The workaround in Dynamic Data is to use a metadata "buddy" class which can be decorated. Note that the metadata class is a separate class, referenced from the partial entity class:
[MetadataType(typeof(Product_Meta))]
public partial class Product
{
}

public class Product_Meta
{
    [Range(5, 50, ErrorMessage = "The product's reorder level must be greater than 5 and less than 50")]
    public object ReorderLevel { get; set; }
}
http://rachelappel.com/asp-net-dynamic-data/custom-validation-in-asp-net-dynamic-data-using-attributes/