I am looking to retrofit a database onto an existing codebase with as little pain as possible, using NHibernate and an SQLite database. I would like to do this by using Fluent NHibernate's automapper to map any field I tag with a [DatabaseField] custom attribute, for any object that inherits from a custom DatabaseObject class. Anything marked with a [Cascade] custom attribute will be cascaded. I have the following code in a test project at the moment:
Entities to map:
class Scan : DatabaseObject
{
[DatabaseField]
public virtual string Name { get; set; }
[DatabaseField]
public virtual DateTime ScanDate { get; set; }
[DatabaseField]
[Cascade]
public virtual Scanner Scanner { get; set; }
public DateTime ManufactureDate { get; set; }
}
class Scanner : DatabaseObject
{
[DatabaseField]
public virtual string Name { get; set; }
}
Session setup:
ProjectConfiguration pcfg = new ProjectConfiguration();
var sessionFactory = Fluently.Configure()
.Database(SQLiteConfiguration.Standard.UsingFile("theTestScannerDatabase.sqlite"))
.Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Scan>(pcfg)
.Override<Scan>(x => x.IgnoreProperty(y => y.ManufactureDate)).Conventions.Setup(x => x.AddFromAssemblyOf<Scan>())))
.ExposeConfiguration(BuildSchema)
.BuildSessionFactory();
return sessionFactory.OpenSession();
Project Configuration object
public class ProjectConfiguration : DefaultAutomappingConfiguration
{
public override bool ShouldMap(Type type)
{
return type.BaseType == typeof(DatabaseObject);
}
public override bool ShouldMap(Member member)
{
return member.MemberInfo.GetCustomAttributes().Contains(new DatabaseField());
}
}
The problem is with the "ManufactureDate" property, which the automapper has tried to map; it then complains that the property isn't virtual. Similar things happen with private properties. I don't want to map every property of my objects to the database, and I thought that the attributes and the ShouldMap overrides would take care of this.
The exception:
InvalidProxyTypeException: The following types may not be used as proxies:
SQLiteTestingApp.Scan: method get_ManufactureDate should be 'public/protected virtual' or 'protected internal virtual'
SQLiteTestingApp.Scan: method set_ManufactureDate should be 'public/protected virtual' or 'protected internal virtual'
For the record, if I remove this field everything else maps exactly how I want it to.
I have read about the Override and OverrideAll methods, which I've tried to use to explicitly exclude these fields, but neither seems to have any effect. I left an example of this attempt in the code snippet above.
So I guess I have two questions:
How can I tell the automapper to ignore anything I don't tag with my attribute?
If this isn't possible, what is the easiest way to map my existing objects to a database without creating a mapping class for every object I want to map?
Thanks in advance
Take a look at the documentation for ignoring properties.
You can use the IgnoreProperty method.
.Override<Scan>(map =>
{
map.IgnoreProperty(x => x.ManufactureDate);
});
All properties/methods in an entity also need to be virtual, or the entity must implement an interface, as per the documentation for persistent classes:
A central feature of NHibernate, proxies, depends upon the persistent
class being non-sealed and all its public methods, properties and
events declared as virtual. Another possibility is for the class to
implement an interface that declares all public members.
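As a sketch (assuming your attribute class is literally named DatabaseField and that it applies to properties), a ShouldMap override that checks for the attribute by type, rather than comparing against a freshly constructed attribute instance, keeps non-tagged properties such as ManufactureDate out of the automapper's view entirely:

```
using System;
using System.Linq;
using FluentNHibernate;
using FluentNHibernate.Automapping;

public class ProjectConfiguration : DefaultAutomappingConfiguration
{
    public override bool ShouldMap(Type type)
    {
        // Map any subclass of DatabaseObject, but not the base class itself.
        return typeof(DatabaseObject).IsAssignableFrom(type)
            && type != typeof(DatabaseObject);
    }

    public override bool ShouldMap(Member member)
    {
        // Check for the attribute by type; this avoids relying on
        // attribute value equality.
        return member.MemberInfo
            .GetCustomAttributes(typeof(DatabaseField), true)
            .Any();
    }
}
```

Note that even with unmapped members excluded, NHibernate's proxy validator still requires all public members of a lazily loaded entity to be virtual, so ManufactureDate must either be made virtual or the entity must be mapped with lazy loading disabled.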
Related
I have a set of interfaces using each others like this:
public interface IModel
{
string Name { get; }
IModelParameters Parameters { get; }
}
public interface IModelParameter
{
int Value { get; }
}
public interface IModelParameters: IList<IModelParameter>
{
void DoSomething();
}
And to implement those interfaces, I have defined those classes:
public class Model: IModel
{
public string Name { get; internal set; }
public ModelParameters Parameters { get; private set; }
IModelParameters IModel.Parameters { get { return Parameters; } }
}
public class ModelParameter: IModelParameter
{
public int Value { get; internal set; }
}
public class ModelParameters: List<ModelParameter>, IModelParameters
{
public void DoSomething()
{
// actual code
}
}
This does not compile, because List<ModelParameter> implements IList<ModelParameter> and not IList<IModelParameter> as required by IModelParameters.
Changing ModelParameters to be List<IModelParameter> fixes the compilation but it breaks Entity Framework migration generation because it no longer recognizes the list as a navigation property because the type parameter is an interface, not a regular class.
I could also have ModelParameters not implement IModelParameters, and instead declare a second class that gets instantiated and filled directly in the IModel.Parameters getter inside Model.
But this feels inefficient as it effectively creates two instances of the same list, one for Entity framework and a temporary one for use by the rest of the application. And because this temporary is filled at runtime, it introduces another potential point of failure.
This is why I'm trying to find a way to express the fact List<ModelParameter> implements IList<IModelParameter> just fine because ModelParameter implements IModelParameter itself.
I have a feeling that covariance/contravariance might be of help here, but I'm not sure how to use that.
You cannot do this. If it were possible to cast a List<ModelParameter> to IList<IModelParameter>, you could then try adding an object of another type to the list, e.g. class MyOtherModelParam : IModelParameter. That is a contradiction, since the type system guarantees that the list only contains ModelParameter objects.
You could replace it with IReadOnlyList<T>; since this interface does not expose any add or set methods, it is safe to cast a List<ModelParameter> to IReadOnlyList<IModelParameter>.
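For example (a self-contained sketch using the types from the question), the covariant conversion compiles with no copying at all:

```
using System.Collections.Generic;

public interface IModelParameter { int Value { get; } }

public class ModelParameter : IModelParameter
{
    public int Value { get; internal set; }
}

public static class Demo
{
    public static void Main()
    {
        var list = new List<ModelParameter> { new ModelParameter() };

        // Allowed: IReadOnlyList<out T> is covariant in T, and
        // ModelParameter implements IModelParameter.
        IReadOnlyList<IModelParameter> readOnly = list;

        // Would NOT compile: the writable interface is invariant.
        // IList<IModelParameter> writable = list;
    }
}
```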
Another possible solution would be to just remove the interface. If you intend to have only one implementation of IModelParameter, the interface serves little purpose, and you might as well just remove it.
I have a company entity
public class Company : Entity<Company>
{
public CompanyIdentifier Id { get; private set; }
public string Name { get; private set; }
..............
..........
}
A company can be an agent, a supplier, both, or neither. (There are more types.) Its behaviour should change based on its types: an agent can get commission and a supplier is able to invoice.
What would be the best way to design the entity or entities or value objects? One option is to add some boolean flags and check their values inside methods:
public class Company : Entity<Company>
{
public CompanyIdentifier Id { get; private set; }
public string Name { get; private set; }
public bool IsAgent { get; private set; }
public bool IsSupplier { get; private set; }
..........
public void Invoice()
{
if(!IsSupplier)
{
throw exception.....;
}
//do something
}
public void GetCommission(int month)
{
if(!IsAgent)
{
throw exception.....;
}
//do something
}
..........
}
To be honest, I do not like this. Is there any design pattern that might help overcome this scenario? How would you design for this scenario, and why?
Implement interfaces explicitly, then override the cast operator to only cast to that interface when valid.
public class Company : ...., IAgentCompany, ISupplierCompany ... {
double IAgentCompany.GetCommission(int month) {
/*do stuff */
}
public static explicit operator IAgentCompany(Company c) {
if(!c.IsAgent)
throw new InvalidOperationException();
return c;
}
}
Explicit implementations of interfaces must be called through their interface, not the concrete type:
// Will not compile
new Company().GetCommission(5);
// Will compile
((IAgentCompany)new Company()).GetCommission(5)
But, now we've overloaded the explicit cast operator. So what does that mean? We can't call GetCommission without casting to IAgentCompany, and now we have a guard to prevent that cast for a company that isn't marked as an agent.
Good things about this approach:
1) You have interfaces that define the aspects of different types of companies and what they can do. Interface segregation is a good thing, and makes the abilities/responsibilities of each type of company clear.
2) You've eliminated a check for every function you want to call that is not "global" to all companies. You do one check when you cast, and then as long as you hold it in a variable typed as the interface, you can happily interact with it without any further checking. This means fewer places to introduce bugs and fewer useless checks.
3) You are leveraging the language's features, and exploiting the type system to help make the code more bullet-proof.
4) You don't have to write tons of subclasses that implement the various combinations of interfaces (possibly 2^n subclasses!) with NotImplementedExceptions or InvalidOperationException everywhere in your code.
5) You don't have to use an enum or a "Type" field, especially when you are asking to mix and match these sets of abilities (you don't just need an enum, but a flags enum). Use the type system to represent different types and behaviors, not an enum.
6) It's DRY.
Bad things about this approach:
1) Explicit interface implementations and overriding explicit cast operators aren't exactly bread and butter C# coding knowledge, and may be confusing to those who come after you.
Edit:
Well, I answered too quickly without testing the idea, and this doesn't work for interfaces. However, see my other answer for another idea.
I would look into separating the implementation for all those types in different classes. You could start doing this by using an enum to represent the company type.
public enum CompanyType
{
Agent = 0,
Supplier
}
public abstract class Company : Entity<Company>
{
public CompanyIdentifier Id { get; private set; }
public string Name { get; private set; }
public CompanyType EntityType { get; private set; }
public abstract void Invoice();
public abstract void GetCommission(int month);
...
This way you get less public properties.
Next, I'd implement specialized classes for supplier and agent (and then for both and none). You can make Company abstract and any specialized methods abstract as well.
This will allow you to separate the distinct behaviors of each type of entity. Comes in handy when you get back to it for maintenance. It also makes the code easier read/understand.
public class SupplierCompany : Company
{
public SupplierCompany()
{
EntityType = CompanyType.Supplier;
}
public override void Invoice()
{...}
public override void GetCommission(int month)
{...}
}
public class AgentCompany : Company
{
public AgentCompany()
{
EntityType = CompanyType.Agent;
}
public override void Invoice()
{...}
public override void GetCommission(int month)
{...}
}
With this you can eliminate testing for the various types in methods like Invoice and GetCommission.
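With the subclasses in place, client code can rely on virtual dispatch instead of flag checks. A brief sketch, using the parameterless constructors shown above:

```
Company supplier = new SupplierCompany();
supplier.Invoice();        // virtual dispatch runs SupplierCompany's override

Company agent = new AgentCompany();
agent.GetCommission(3);    // AgentCompany's override runs; no type tests needed
```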
As with most DDD questions, it usually boils down to Bounded Contexts. I'd guess you're dealing with some distinct bounded contexts here (this is most obvious from your statement "A company can be a agent or supplier or both or none."). In at least one context you need to consider all Company entities equally, regardless of whether they are Agents or Suppliers. However I think you need to think about whether or not your Invoice or GetCommission operations are applicable in this broader context? I'd say those will apply in more specialized contexts, where the distinction between an Agent and a Supplier is much more crucial.
You may be running into trouble because you're trying to create an all encompassing Company entity which is applicable in all contexts... this is almost impossible to achieve without weird code constructs & fighting against the type system (as is being suggested in your other answers).
Please read http://martinfowler.com/bliki/BoundedContext.html
As a rough idea of how your contexts might look:
Broad "Company" Context
{
Entity Company
{
ID : CompanyIdentifier
Name : String
}
}
Specialized "Procurement" Context
{
Entity Supplier
{
ID : CompanyIdentifier
Name : String
Invoice()
}
}
Specialized "Sales" Context
{
Entity Agent
{
ID : CompanyIdentifier
Name : String
GetCommission()
}
}
Does it make sense to try to use the same object in both the Procurement and Sales contexts? These contexts have very different requirements, after all. One of the lessons of DDD is that we split the domain into these bounded contexts, and do not try to make "God" objects which can do everything.
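A rough C# rendering of the same idea, with one entity per context (the namespace names are illustrative, and CompanyIdentifier is taken from the question):

```
namespace BroadContext
{
    // The broad context only knows a company's identity.
    public class Company
    {
        public CompanyIdentifier Id { get; private set; }
        public string Name { get; private set; }
    }
}

namespace ProcurementContext
{
    // Procurement only ever deals with suppliers, so no flag is needed.
    public class Supplier
    {
        public CompanyIdentifier Id { get; private set; }
        public string Name { get; private set; }
        public void Invoice() { /* ... */ }
    }
}

namespace SalesContext
{
    // Sales only ever deals with agents.
    public class Agent
    {
        public CompanyIdentifier Id { get; private set; }
        public string Name { get; private set; }
        public void GetCommission(int month) { /* ... */ }
    }
}
```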
I'm attempting to simulate a scenario in which I am inheriting from concrete base classes in a 3rd party library, then mapping my own classes using Entity Framework Code First. I would really prefer for my classes to have the same simple name as the base classes. I obviously can't change the class names of the base classes, nor can I change the base class to abstract. As expected, I get the following error:
The type 'EfInheritanceTest.Models.Order' and the type
'EfInheritanceTest.Models.Base.Order' both have the same simple name
of 'Order' and so cannot be used in the same model. All types in a
given model must have unique simple names. Use 'NotMappedAttribute' or
call Ignore in the Code First fluent API to explicitly exclude a
property or type from the model.
As I understand it, in EF6 this is possible so long as only one of the classes is actually mapped. However, if I attempt to ignore the base class using the fluent API, I get the following error instead:
The type 'EfInheritanceTest.Models.Order' was not mapped. Check that
the type has not been explicitly excluded by using the Ignore method
or NotMappedAttribute data annotation. Verify that the type was
defined as a class, is not primitive or generic, and does not inherit
from EntityObject.
... which seems to indicate that by ignoring the base class, I ignore any subclasses as well. Full code below. Any way to work around this and "unignore" the subclass? Or is this a limitation of EF type mapping?
namespace EfInheritanceTest.Models.Base
{
public class Order
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
public virtual decimal Amount { get; set; }
}
}
namespace EfInheritanceTest.Models
{
public class Order : Base.Order
{
public virtual DateTime OrderDate { get; set; }
}
}
namespace EfInheritanceTest.Data
{
public class OrdersDbContext : DbContext
{
public OrdersDbContext() : base ("OrdersDbContext") { }
public IDbSet<EfInheritanceTest.Models.Order> Orders { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
modelBuilder.Types<Models.Base.Order>().Configure(c => c.Ignore());
modelBuilder.Types<Models.Order>().Configure(c => c.ToTable("order"));
modelBuilder.Entity<Models.Order>().Map(c =>
{
c.MapInheritedProperties();
c.ToTable("order");
});
}
}
}
It's a bit late, but anyway: I was able to solve the issue with a very simple configuration:
modelBuilder.Ignore<MyBaseClass>();
So EF doesn't care about MyBaseClass at all, but works with MyInheritedClass as if it were not derived. Perfect!
The second error was related to:
modelBuilder.Types<Models.Base.Order>().Configure(c => c.Ignore());
as it excluded both classes from the EF mapping (it excludes the whole hierarchy rooted at Models.Base.Order).
There is also an excellent post on EF inheritance mapping.
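Putting it together, a minimal OnModelCreating that maps only the derived class might look like this (a sketch based on the code in the question):

```
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Ignore<T>() excludes only the base type itself, so the derived
    // Models.Order is still mapped. By contrast,
    // Types<T>().Configure(c => c.Ignore()) excludes the whole hierarchy.
    modelBuilder.Ignore<Models.Base.Order>();
    modelBuilder.Entity<Models.Order>().ToTable("order");
}
```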
I don't think that this will work if the child class has the same name as the parent class. I've definitely done this where the derived class has a different name than the parent class, but it doesn't look like it is possible when the names are the same (which I didn't know before). To test it, I took one of my projects where the inheritance was working with EF, changed the names to fit the naming scheme you have above, and got the same errors you list. It looks like this might be a limitation of the EF type mapping. Are you able to change the name of your derived class to be different from the parent's?
I have been looking at the inner workings of the StockTrader RI for PRISM.
In this RI, MEF and a custom attribute system are used in combination to register views with regions as opposed to hooking up things to the RegionManager in the Module Initializer.
More specifically, there is a ViewExportAttribute which:
is decorated with MetadataAttribute
implements IViewRegionRegistration
The MetadataAttribute and the "attribute view" IViewRegionRegistration can be leveraged by System.Lazy<T,TMetaData> in AutoPopulateExportedViewsBehavior to achieve proper linking of regions and views.
In general the interplay between System.Lazy<T,TMetaData> and the actual metadata is elaborated here, more specifically the section "Using Strongly-typed Metadata".
To be clear, I understand the intent of Lazy and it clearly works. However, what I completely do not understand is where and how the link occurs between the metadata view supplied by the attribute (which is just an interface) and the filling of the TMetaData properties with the actual data supplied by the MetadataAttribute.
To make my request even clearer, from the previously referenced example:
First, an interface is defined that can serve as a sort of template to pass certain metadata:
public interface IMessageSenderCapabilities
{
MessageTransport Transport { get; }
bool IsSecure { get; }
}
Next, A corresponding MetaDataAttribute is defined (which has the same properties as the previous interface)
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple=false)]
public class MessageSenderAttribute : ExportAttribute
{
public MessageSenderAttribute() : base(typeof(IMessageSender)) { }
public MessageTransport Transport { get; set; }
public bool IsSecure { get; set; }
}
The attribute can be used in an export, where actual values are set for the attribute properties:
[MessageSender(Transport=MessageTransport.Smtp, IsSecure=true)]
public class SecureEmailSender : IMessageSender
{
public void Send(string message)
{
Console.WriteLine(message);
}
}
Now finally, we can do some importing:
public class HttpServerHealthMonitor
{
[ImportMany]
public Lazy<IMessageSender, IMessageSenderCapabilities>[] Senders { get; set; }
public void SendNotification()
{
foreach(var sender in Senders)
{
if (sender.Metadata.Transport == MessageTransport.Smtp &&
sender.Metadata.IsSecure)
{
var messageSender = sender.Value;
messageSender.Send("Server is fine");
break;
}
}
}
}
In this last step, sender.Metadata.Transport is evaluated on that very Lazy<>. Therefore, somewhere along the way, the Lazy is made aware of the actual values of the metadata, not just the interface it gets passed. I want to understand how that happens and who or what is responsible for it, even if it is just the very general flow.
After some more time in Reflector I think I can start to formulate an answer, although it turns out a lot of things are happening, so this answer might evolve. I am writing it down here for the benefit of learning this myself.
MEFBootstrapper.Run()
...
MEFBootstrapper.Container.GetExports(...) because CompositionContainer : ExportProvider, ... and ExportProvider defines public Lazy<T, TMetadataView> GetExport<T, TMetadataView>().
Next private Lazy<T, TMetadataView> GetExportCore<T, TMetadataView>(string contractName)
Next internal static Lazy<T, M> CreateStronglyTypedLazyOfTM<T, M>(Export export)
In here, AttributedModelServices.GetMetadataView<M>(export.Metadata), where M is the type of the metadata view. export itself is of type System.ComponentModel.Composition.Primitives.Export, which has an ExportDefinition property, of which an inherited AttributedExportDefinition exists.
AttributedExportDefinition.Metadata, whose getter contains this._member.TryExportMetadataForMember(out strs);
TryExportMetadataForMember(...) finally has a check, type.IsAttributeDefined<MetadataAttributeAttribute>, to see whether a MetadataAttribute is applied, as for MessageSenderAttribute in the question.
So this is more or less (very roughly) how we get to the actual metadata on the export, and presumably, with some more detours, this exported metadata also reaches the Lazy, although I have yet to find out exactly how that works.
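The GetMetadataView step can be observed in isolation: given a plain metadata dictionary, MEF generates (and caches) a proxy class that implements the requested view interface, with each property reading the dictionary entry of the same name. A sketch using the types from the question:

```
using System.Collections.Generic;
using System.ComponentModel.Composition;

var metadata = new Dictionary<string, object>
{
    { "Transport", MessageTransport.Smtp },
    { "IsSecure", true }
};

// MEF emits a class implementing IMessageSenderCapabilities whose
// properties are backed by the dictionary values.
var view = AttributedModelServices.GetMetadataView<IMessageSenderCapabilities>(metadata);

// view.Transport is now MessageTransport.Smtp and view.IsSecure is true.
```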
Any feedback would still be appreciated.
Trying to understand what is going on with the code in my original question has spawned another question:
There is a subtle difference between the StockTrader RI and the example provided in the MEF Documentation
In Stocktrader, ViewExportAttribute is defined:
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
[MetadataAttribute]
public sealed class ViewExportAttribute : ExportAttribute, IViewRegionRegistration
{
... omitted for brevity ...
}
The MEF docs give a similar example (also in the original question):
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple=false)]
public class MessageSenderAttribute : ExportAttribute
{
public MessageSenderAttribute() : base(typeof(IMessageSender)) { }
public MessageTransport Transport { get; set; }
public bool IsSecure { get; set; }
}
So with the above code blocks, the difference is that in the first case the attribute derives from the interface that defines the "metadata view", whereas in the second example this is not the case; the attribute just has the same properties as the IMessageSenderCapabilities interface.
"No big deal" you would think but then in StockTrader RI:
[ImportMany(AllowRecomposition = true)]
public Lazy<object, IViewRegionRegistration>[] RegisteredViews { get; set; }
Whereas in the MEF Example:
[ImportMany]
public Lazy<IMessageSender, IMessageSenderCapabilities>[] Senders { get; set; }
So here, the difference is that in Stocktrader RI, the type that we are trying to Lazily import is not specified (it is just object) whereas in the second it is defined more specifically (IMessageSender).
The end result is more or less the same, some type is Lazily imported along with metadata.
However, what I would like to also learn is:
If the differences at the individual key points in both examples are related.
Specifically, in the StockTrader example, how do we know what to import lazily? Is it because ViewExportAttribute specifically derives from IViewRegionRegistration that we can have Lazy<object, ...> later on; that is, does the system know what to import because only types carrying that metadata will be imported? All this without specifying that the objects will actually be views, i.e. UserControls?
Does Fluent NHibernate have a simple method for automapping entities?
Let's say I have some classes like the following one and corresponding classmaps:
public sealed class Hello
{
public String Name { get; set; }
public DateTime Timestamp { get; set; }
}
public class HelloMapping : ClassMap<Hello>
{
public HelloMapping()
{
Not.LazyLoad();
// Some Id here
Map(x => x.Name).Not.Nullable().Length(64);
Map(x => x.Timestamp).Not.Nullable();
}
}
So, does Fluent NHibernate have something like "add every mapped entity like Hello"?
If not, what's the easiest way to let NHibernate use the mappings I provide?
It depends on what you mean by "like"?
Do you mean all entities in the same namespace? Then you can do
public class MyConfiguration : DefaultAutomappingConfiguration {
public override bool ShouldMap(Type type) {
return type.Namespace == typeof(Hello).Namespace;
}
}
Whatever you mean, you can probably set a convention to do what it is you are trying to achieve. See auto mapping in Fluent NHibernate.
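Wiring that configuration into the session factory might look like this (a sketch; the database setup is illustrative, and MyConfiguration is the class defined above):

```
var sessionFactory = Fluently.Configure()
    .Database(SQLiteConfiguration.Standard.InMemory())
    .Mappings(m => m.AutoMappings.Add(
        // Scan the assembly containing Hello, filtered by MyConfiguration.
        AutoMap.AssemblyOf<Hello>(new MyConfiguration())))
    .BuildSessionFactory();
```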
Short answer: http://wiki.fluentnhibernate.org/Auto_mapping. You can use the objects and basic conventions built into Fluent NHibernate to map objects that don't require much custom behavior.
You could also use inheritance to define mappings that have common elements across most or all classes. Say Hello is a base class that defines Id, Name and Timestamp. You can define the mapping for this base class, then either derive from it directly to produce mappings for other objects, or you could define JoinedSubclass mappings for objects that should be stored in a common table structure (usually because they are various flavors of a base class, like CheckingAccount, SavingsAccount and MoneyMarketAccount are all different types of BankAccounts with substantially similar data structures).