How do I avoid requiring code like this:
public static class BusinessLogicAutomapper
{
public static bool _configured;
public static void Configure()
{
if (_configured)
return;
Mapper.CreateMap<Post, PostModel>();
_configured = true;
}
}
in my BL assembly, and having to call Configure() from my Global.asax in my MVC application?
I mean, I expect a call like this:
public PostModel GetPostById(long id)
{
EntityDataModelContext context = DataContext.GetDataContext();
Post post = context.Posts.FirstOrDefault(p => p.PostId == id);
PostModel mapped = Mapper.Map<Post, PostModel>(post);
return mapped;
}
to Mapper.Map<TIn,TOut> to produce the mapper if it doesn't already exist, instead of having to create it myself manually (I shouldn't even need to know about this inner working). How can I avoid having to declare mappers for AutoMapper explicitly?
A solution that's natural to AutoMapper would be desired, but an extension or some architectural change in order to avoid this initialization would work too.
I'm using MVC 3, .NET 4, and no IoC/DI (yet, at least).
I completely misunderstood what you were trying to do in my original answer. You can accomplish what you want by implementing part of the functionality of AutoMapper using reflection. It will be of very limited utility, and the more you extend it, the more like AutoMapper it will become, so I'm not sure there's any long-term value to it.
I do use a small utility like what you are wanting, to automate my auditing framework by copying data from an entity model to its associated audit model. I created it before I started using AutoMapper and haven't replaced it. I call it a ReflectionHelper; the code below is a modification of it (from memory). It only handles simple properties, but it can be adapted to support nested models and collections if need be. It's convention-based, assuming that properties with the same name correspond and have the same type. Properties that don't exist on the type being copied to are simply ignored.
public static class ReflectionHelper
{
public static T CreateFrom<T,U>( U from )
where T : class, new()
where U : class
{
var to = Activator.CreateInstance<T>();
var toType = typeof(T);
var fromType = typeof(U);
foreach (var toProperty in toType.GetProperties())
{
var fromProperty = fromType.GetProperty( toProperty.Name );
if (fromProperty != null)
{
toProperty.SetValue( to, fromProperty.GetValue( from, null), null );
}
}
return to;
}
}
Used as
var model = ReflectionHelper.CreateFrom<ViewModel,Model>( entity );
var entity = ReflectionHelper.CreateFrom<Model,ViewModel>( model );
Original
I do my mapping in a static constructor. The mapper is initialized the first time the class is referenced without having to call any methods. I don't make the logic class static, however, to enhance its testability and the testability of classes using it as a dependency.
public class BusinessLogicAutomapper
{
static BusinessLogicAutomapper()
{
Mapper.CreateMap<Post, PostModel>();
Mapper.AssertConfigurationIsValid();
}
}
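For completeness, a minimal sketch of the rest of that class (GetPostById, DataContext and the entity names come from the question); since the static constructor above is guaranteed to run before the first instance method call, no explicit Configure() call is needed anywhere:

public PostModel GetPostById(long id)
{
    // The map already exists here - the static constructor has run by now
    EntityDataModelContext context = DataContext.GetDataContext();
    Post post = context.Posts.FirstOrDefault(p => p.PostId == id);
    return Mapper.Map<Post, PostModel>(post);
}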
Check out AutoMapper Profiles.
I have this set up in my Global.asax - it runs once at startup, so everything is ready to go at runtime.
I also have one unit test which covers all maps to check they are correct.
A good example of this is Ayende's RaccoonBlog:
https://github.com/ayende/RaccoonBlog
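Roughly, the setup looks like this (a sketch; the exact Profile API depends on your AutoMapper version - older releases override Configure(), newer ones put the CreateMap calls in the constructor):

public class PostProfile : Profile
{
    protected override void Configure()
    {
        CreateMap<Post, PostModel>();
        // ... the rest of your maps
    }
}

// Global.asax
protected void Application_Start()
{
    Mapper.Initialize(cfg => cfg.AddProfile<PostProfile>());
    // ... the usual MVC registration calls
}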
Related
Let's consider a dynamic mapping of an anonymous type to a predefined one:
Mapper.Initialize(cfg => cfg.CreateMissingTypeMaps = true);
var dest = Mapper.Map<Dest>(new {a = 42});
I would like to ensure all Dest properties were mapped. Actual values don't matter, it can be null on anything else.
This check would be similar to Mapper.Configuration.AssertConfigurationIsValid() (which doesn't consider dynamically created type maps), performed before/after each map call.
A simple property-to-property reflection comparison won't work because the AutoMapper configuration should be taken into account (I mean all those nice features like automatic type conversion, flattening, etc.). So the check should use the AutoMapper API... or not?
P.S. I understand that it could significantly decrease performance. The idea is to achieve a kind of code contract and enable it in dev configurations only. Any suggestions?
I think unit tests that exercise your mapping logic will do the same job you're expecting from your "framework".
Only, instead of using AutoMapper directly in your code, create a wrapper which you use everywhere you need it, and which you test:
public static class MapperWrapper
{
public static Dest Map(object source)
{
return Mapper.Map<Dest>(source);
}
}
Unit tests will provide a "code contract" for the MapperWrapper.Map method and the AutoMapper configuration. The tests will also act as a "safety net" for developers when they play with the AutoMapper configuration.
// In example used
// NUnit as test framework
// FluentAssertions as assertion framework
[TestFixture]
public class MapperWrapperTests
{
[Test]
public void Map_ShouldReturnInstanceWithCorrectValueOfAProperty()
{
var expectedA = 42;
var input = new { A = expectedA };
var actualResult = MapperWrapper.Map(input);
actualResult.A.Should().Be(expectedA);
}
// The FluentAssertions framework provides "deep object graph" assertions
[Test]
public void Map_ShouldReturnInstanceWithCorrectValues()
{
var expectedResult = new Dest
{
Id = 42,
Name = "Forty-Two",
SomeAnotherType = new AnotherType { IsValid = true }
};
var input = new
{
Id = "42",
Name = "Forty-Two",
SomeAnotherType = new { IsValid = true }
};
var actualResult = MapperWrapper.Map(input);
actualResult.ShouldBeEquivalentTo(expectedResult);
}
}
By executing those tests on every build you will get feedback on whether the AutoMapper configuration is correct.
Instead of using the singleton MapperWrapper you can introduce an abstraction of it (an interface or abstract class) and implement that abstraction with AutoMapper methods.
With the abstraction your application will not depend tightly on AutoMapper.
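For example (a sketch - IObjectMapper is just an illustrative name, not an AutoMapper type):

public interface IObjectMapper
{
    Dest Map(object source);
}

public class AutoMapperObjectMapper : IObjectMapper
{
    public Dest Map(object source)
    {
        return Mapper.Map<Dest>(source);
    }
}

Consumers then take an IObjectMapper dependency, and swapping AutoMapper for something else later only touches the single implementation class.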
What you're looking for is AutoMapper configuration validation; it does all the validation you point out. But instead of creating maps at runtime, simply create them up front (this has the added benefit of not accidentally creating invalid/incomplete/impossible maps).
Remove the CreateMissingTypeMaps = true part and create Profile types with explicit CreateMap calls to your types you want to map.
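A sketch of what the check then looks like (DestProfile is a hypothetical profile containing an explicit CreateMap for Dest; with explicit maps, AssertConfigurationIsValid fails the test if any Dest member is left unmapped):

[Test]
public void AutoMapperConfiguration_IsValid()
{
    Mapper.Initialize(cfg => cfg.AddProfile<DestProfile>());
    Mapper.Configuration.AssertConfigurationIsValid();
}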
I am trying to create a new core framework (web mostly) with the Repository and Unit of Work patterns for my applications, so that I can switch my ORM to NHibernate or Dapper later on.
Right now my Unit of Work interface looks like this:
public interface IUnitOfWork : IDisposable
{
void Commit();
void Rollback();
}
And the Entity Framework implementation is like this (trimmed for readability):
public class EfUnitOfWork : IUnitOfWork
{
....
public EfUnitOfWork(ApplicationDbContext context)
{
this._context = context;
this._transaction = new EfTransaction(_context.Database.BeginTransaction());
}
public void Commit()
{
this._context.SaveChanges(true);
this._transaction.Commit();
...
}
public void Rollback()
{ ...
}
}
The problem is that in my service layer, which contains the business logic, I can do something like this with the navigation properties:
public bool CreateCity(CityCreateModel model)
{
using (var uow = _unitOfWorkFactory.Create())
{
var city = new City();
city.Name = model.Name;
city.State = new State() { Country = new Country() { Name = "SomeCountry" }, Name = "SomeCity" };
_cityRepository.Create(city);
try
{
uow.Commit();
return true;
}
catch (Exception)
{
uow.Rollback();
throw;
}
}
}
The repository Create method is pretty straightforward, as I use Entity Framework:
public void Create(City entity)
{
_set.Add(entity);
}
The problem begins here: when a team member writes code like the service example above, using the new keyword on navigation properties or adding items to collection navigation properties, Entity Framework detects these changes, and when I save changes they are also persisted to the database.
If I try to switch the existing sample to Dapper.NET or to a REST service later on, there can be a LOT of problems: I would have to go through every navigation property, track whether it has changed or not, and write a lot of (possibly throwaway) code for them, because I don't really know what was inserted into which table via Entity Framework and what wasn't (navigation properties are also inserted, yet my repositories were called only once, for the single City insert in my example above).
Is there a way to prevent this behavior, or is there a known pattern that I can adopt early on so I won't have problems later?
How did you overcome this?
Before I begin, I want to make a few notes about your code:
public EfUnitOfWork(ApplicationDbContext context)
{
this._context = context;
this._transaction = new EfTransaction(_context.Database.BeginTransaction());
}
1) From your example I can see that you are sharing the same DbContext (passed as a constructor parameter) across the whole application. I do not think this is a good idea, because the entities will be cached in the first-level cache and the change tracker will track them all. With this approach you will soon run into performance problems as the database grows.
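A minimal sketch of what I mean (IUnitOfWorkFactory is assumed to be the interface behind the _unitOfWorkFactory in your service example): create a fresh, short-lived context per unit of work instead of sharing one instance.

public class EfUnitOfWorkFactory : IUnitOfWorkFactory
{
    public IUnitOfWork Create()
    {
        // a new context per unit of work; it lives and dies with the unit of work
        return new EfUnitOfWork(new ApplicationDbContext());
    }
}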
_cityRepository.Create(city);
public void Create(City entity)
{
_set.Add(entity);
}
2) The base repository should be generic over a type T, where T is an entity. Then you can create a city:
var city = _cityRepository.Create();
Fill the city afterwards, or provide the data as parameters to the Create method.
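A rough sketch of such a generic repository (the names are illustrative; the Create method news up the entity and attaches it immediately so the ORM tracks it):

public interface IGenericRepository<T> where T : class, new()
{
    T Create();
    void Add(T entity);
    T Find(int id);
}

public class EfRepository<T> : IGenericRepository<T> where T : class, new()
{
    private readonly DbSet<T> _set;

    public EfRepository(DbContext context)
    {
        _set = context.Set<T>();
    }

    public T Create()
    {
        var entity = new T();
        _set.Add(entity); // attached right away, so it is tracked by the context
        return entity;
    }

    public void Add(T entity)
    {
        _set.Add(entity);
    }

    public T Find(int id)
    {
        return _set.Find(id);
    }
}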
Back to your question:
Is there a way to prevent this behavior or is there a pattern known that i can adapt early on so i won't have problems later on?
Each ORM has its own design concept and it is not easy to find a generic approach that fits them all, so I would do the following:
1) Separate the repository contracts into their own assembly (a contracts DLL).
2) For each ORM framework, use a separate assembly which implements the repository contracts.
Example:
public interface ICityRepository : IGenericRepository<City>
{
City Create();
City Find(int id);
....
}
Entity Framework assembly:
public class CityRepositoryEF : ICityRepository
{
..
Dapper assembly:
public class CityRepositoryDapper : ICityRepository
{
..
You can find a brilliant walk-through if you follow the URL below. It is authored by Julie Lerman, who is an Entity Framework evangelist.
http://thedatafarm.com/data-access/agile-entity-framework-4-repository-part-1-model-and-poco-classes/
I'm trying to get my head around this issue where I am using the Entity Framework (6) in an N-tier application. Since data from the repository (which contains all communication with the database) should be used in a higher tier (the UI, services etc), I need to map it to DTOs.
In the database, there are quite a few many-to-many relationships going on, so the data structure can/will get complex at some point in the application's lifetime. What I stumbled upon is that I am repeating the exact same code when writing the repository methods. An example of this is my FirmRepository, which contains a GetAll() method and a GetById(int firmId) method.
In the GetById(int firmId) method, I have the following code (incomplete since there's a lot more relations that needs to be mapped to DTOs):
public DTO.Firm GetById(int id)
{
// Return result
var result = new DTO.Firm();
try
{
// Database connection
using (var ctx = new MyEntities())
{
// Get the firm from the database
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
// If a firm was found, start mapping to DTO object
if (firm != null)
{
result.Address = firm.Address;
result.Address2 = firm.Address2;
result.VAT = firm.VAT;
result.Email = firm.Email;
// Map Zipcode and City
result.City = new DTO.City()
{
CityName = firm.City.City1,
ZipCode = firm.City.ZipCode
};
// Map ISO code and country
result.Country = new DTO.Country()
{
CountryName = firm.Country.Country1,
ISO = firm.Country.ISO
};
// Check if this firm has any exclusive parameters
if (firm.ExclusiveParameterType_Product_Firm.Any())
{
var exclusiveParamsList = new List<DTO.ExclusiveParameterType>();
// Map Exclusive parameter types
foreach (var param in firm.ExclusiveParameterType_Product_Firm)
{
// Check if the exclusive parameter type isn't null before proceeding
if (param.ExclusiveParameterType != null)
{
// Create a new exclusive parameter type DTO
var exclusiveParameter = new DTO.ExclusiveParameterType()
{
ID = param.ExclusiveParameterType.ID,
Description = param.ExclusiveParameterType.Description,
Name = param.ExclusiveParameterType.Name
};
// Add the new DTO to the list
exclusiveParamsList.Add(exclusiveParameter);
}
}
// A lot more objects to map....
// Set the list on the result object
result.ExclusiveParameterTypes = exclusiveParamsList;
}
}
}
// Return DTO
return result;
}
catch (Exception e)
{
// Log exception
Logging.Instance.Error(e);
// Simply return null
return null;
}
}
This is just one method. The GetAll() method will then have the exact same mapping logic, which results in duplicated code. Also, when more methods get added, e.g. a Find or Search method, the same mapping needs to be copied again. This is, of course, not ideal.
I have read a lot about the famous AutoMapper framework that can map entities to/from DTOs, but since I have these many-to-many relations it quickly feels bloated with AutoMapper config code. I've also read this article, which makes sense in my eyes: http://rogeralsing.com/2013/12/01/why-mapping-dtos-to-entities-using-automapper-and-entityframework-is-horrible/
Is there any other way of doing this without copy/pasting the same code over and over again?
Thanks in advance!
You can make an extension method on the entity class (DB.Firm) like this:
public static class Extensions
{
public static DTO.Firm ToDto(this DB.Firm firm)
{
var result = new DTO.Firm();
result.Address = firm.Address;
result.Address2 = firm.Address2;
//...
return result;
}
}
Then you can convert a DB.Firm object anywhere in your code, e.g. firm.ToDto();
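For example, both repository methods from the question can then share the same mapping (note that ToDto cannot be translated to SQL, so the query is materialized first with AsEnumerable):

public DTO.Firm GetById(int id)
{
    using (var ctx = new MyEntities())
    {
        var firm = ctx.Firms.FirstOrDefault(f => f.ID == id);
        return firm != null ? firm.ToDto() : null;
    }
}

public List<DTO.Firm> GetAll()
{
    using (var ctx = new MyEntities())
    {
        return ctx.Firms.AsEnumerable().Select(f => f.ToDto()).ToList();
    }
}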
An alternate strategy is to use a combination of the class constructor and an explicit and/or implicit conversion operator(s). It allows you to cast one user-defined entity to another entity. The feature also has the added benefit of abstracting the process out so you aren't repeating yourself.
In your DTO.Firm class, define either an explicit or implicit operator (Note: I am making assumptions about the name of your classes):
public class Firm {
public Firm(DB.Firm firm) {
Address = firm.Address;
Email = firm.Email;
City = new DTO.City() {
CityName = firm.City.City1,
ZipCode = firm.City.ZipCode
};
// etc.
}
public string Address { get; set;}
public string Email { get; set; }
public DTO.City City { get; set; }
// etc.
public static explicit operator Firm(DB.Firm f) {
// guard against null so casting a missed lookup doesn't throw
return f == null ? null : new Firm(f);
}
}
You can then use it in your repository code like this:
public DTO.Firm GetById(int id) {
using (var ctx = new MyEntities()) {
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
return (DTO.Firm)firm;
}
}
public List<DTO.Firm> GetAll() {
using (var ctx = new MyEntities()) {
// Cast<T>() will not apply the user-defined conversion; materialize first, then convert in memory
return ctx.Firms.AsEnumerable().Select(f => (DTO.Firm)f).ToList();
}
}
Here's the reference in MSDN.
About mapping: it actually does not really matter whether you use AutoMapper or prepare your mappings completely manually in some method (an extension method, or an explicit casting operator as mentioned in other answers) - the point is to have it in one place for reusability.
Just remember - you used the FirstOrDefault method, so you actually called the database for a Firm entity. Now, when you use properties of this entity, especially collections, they will be lazy loaded. If you have a lot of them (as you suggest in your question), you may face a huge number of additional calls, and that might be a problem, especially in a foreach loop. You may end up with dozens of calls and heavy performance issues just to retrieve one DTO. Rethink whether you really need to get such a big object with all its relations.
For me, your problem is much deeper and concerns application architecture. I must say, I personally do not like the repository pattern with Entity Framework, especially in combination with the Unit of Work pattern. It seems to be very popular (at least if you look at the Google results for the query), but for me it does not fit very well with EF. Of course, it's just my opinion; you may not agree with me. For me it's just building another abstraction over an already implemented Unit of Work (DbContext) and repositories (DbSet objects). I found this article very interesting on this topic. The command/query separation way of doing things seems much more elegant to me, and it also fits the SOLID principles much better.
As I said, it's just my opinion and you may or may not agree with me. But I hope it gives you some perspective here.
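To make the command/query idea concrete, here is a rough sketch of a query object for the Firm example above; it projects straight into the DTO inside the LINQ query, so everything is fetched in a single database call and the lazy-loading problem never appears (the property names are taken from the question, the class name is mine):

public class GetFirmByIdQuery
{
    public DTO.Firm Execute(int id)
    {
        using (var ctx = new MyEntities())
        {
            return ctx.Firms
                .Where(f => f.ID == id)
                .Select(f => new DTO.Firm
                {
                    Address = f.Address,
                    Address2 = f.Address2,
                    Email = f.Email,
                    City = new DTO.City { CityName = f.City.City1, ZipCode = f.City.ZipCode }
                })
                .FirstOrDefault();
        }
    }
}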
I have:
Category class
public partial class Category : BaseEntity
{
...
public string Name { get; set; }
private ICollection<Discount> _appliedDiscounts;
public virtual ICollection<Discount> AppliedDiscounts
{
get { return _appliedDiscounts ?? (_appliedDiscounts = new List<Discount>()); }
protected set { _appliedDiscounts = value; }
}
}
Service:
public IList<Category> GetCategories()
{
// ado.net to return category entities.
}
public ICollection<Discount> GetDiscount(int categoryId)
{
// ado.net .
}
I don't want to use an ORM like EF, just plain ADO.NET, and I don't want to put an ugly Lazy<T> in my domain definition, e.g. public Lazy....
So how, in this case, could I get AppliedDiscounts automatically bound lazily to GetDiscount without explicitly declaring Lazy<T> on the Category class?
I don't know why you don't like the Lazy<T> type - it is simple, useful and you don't have to worry about anything.
And no one forces you to use public Lazy<IEnumerable<Discount>> GetDiscounts();
You could use it internally:
Lazy<IEnumerable<Discount>> discounts = new Lazy<IEnumerable<Discount>>(() => new DiscountCollection());
public IEnumerable<Discount> GetDiscounts()
{
return discounts.Value;
}
It operates as intended - the collection won't be created until someone actually asks for the discounts.
If you really want, you could create your own implementation - something like the Singleton class in Richter's "CLR via C#" book (because Lazy has all the 'properties' of a proper singleton container - thread safety, and only one instance of the inner 'singleton' value can ever be evaluated...).
But do you really want to create and test it? You would just be replacing a well-designed standard component with a fishy custom one.
AFTER ACTUALLY READING YOUR QUESTION WITH ATTENTION
1) If your lazy loading does not need any thread safety, you could accomplish similar behaviour even without Lazy or complex constructs - just use a Func delegate:
public partial class Category : BaseEntity
{
...
private Func<ICollection<Discount>> getDiscounts;
public Category(Func<ICollection<Discount>> getDiscounts) { ... }
public string Name { get; set; }
private ICollection<Discount> _appliedDiscounts;
public virtual ICollection<Discount> AppliedDiscounts
{
get { return _appliedDiscounts ??
(_appliedDiscounts = new List<Discount>(this.getDiscounts())); }
protected set { _appliedDiscounts = value; }
}
}
public IList<Category> GetCategories()
{
// ado.net to return category entities.
... = new Category(() => this.GetDiscount((Int32)curDataObject["categoryId"]))
}
public ICollection<Discount> GetDiscount(int categoryId)
{
// ado.net .
}
If you inject your service it will be even more simple:
public virtual ICollection<Discount> AppliedDiscounts
{
get { return _appliedDiscounts ??
(_appliedDiscounts = new List<Discount>(this.service.GetDiscounts(this.CategoryId))); }
protected set { _appliedDiscounts = value; }
}
2) If you need to use these objects in multiple threads then you will have to redesign your classes - they don't look thread-safe.
AFTER THE COMMENT
what i want to do is exactly just like this guy
stackoverflow.com/questions/8188546/… . I want to know the concept how
ORM like EF do with the domain, keep it clean and separated from
injecting service class but still able to handle lazy loading. I know
i can use Reflection to get all the object properties and its object
variables(like AppliedDiscounts), but dont' know how to transform
these dynamically to lazy type so that it could be loaded later when
needed.
It is a universal principle that you can't get something for nothing. You can't make your entities both clean and separated from any services (even through some proxy) and also allow them to load lazily - if they don't know anything about services and the services don't know anything about them, then how would the lazy loading work? There is no way to achieve such absolute decoupling (for two components to interact, they have to either know about each other, know about some third communicator module, or some module has to know about both of them). But such coupling can be partially or completely hidden.
Technologies that provide entity object models usually use some of the following techniques:
Code generation to create wrappers (or proxies) over your plain data objects, or concrete instances of your interfaces. It could be generated C# code, IL weaving, or even an in-memory assembly created dynamically at runtime using something like Reflection.Emit. This is not the easiest or most direct approach, but it gives you enormous code-tuning capabilities. A lot of modern frameworks use it.
Implementation of all those capabilities in Context classes - you won't have the lazy loading on your end objects, you will have to invoke it explicitly through the Context classes, e.g. context.Orders.With("OrderDetails") (a sketch of this appears after this list). The positive side is that the entities will be clean.
Injection of service(or only of the needed subset of its operations) - that's what you'd prefer to avoid.
Use of events or some other observer-like pattern - your entities will be free of service logic and dependencies (at least in some sense), but will contain some hookup infrastructure that won't be very straightforward or easy to manage.
For your custom object model, options 2 or 3 are the best bets. But you could try option 1 with Roslyn.
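As an illustration of option 2, adapted to the Category/Discount model from the question (GetCategoriesWithDiscounts is a made-up method name, and the Id property is assumed to come from BaseEntity): the entity stays clean, and the service decides explicitly what to load.

public IList<Category> GetCategoriesWithDiscounts()
{
    IList<Category> categories = GetCategories(); // plain ADO.NET, as in the question
    foreach (var category in categories)
    {
        // explicit, eager load instead of lazy loading hidden inside the entity
        foreach (var discount in GetDiscount(category.Id))
            category.AppliedDiscounts.Add(discount);
    }
    return categories;
}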
In my previous applications when I used LINQ to SQL I would always use one class to put my LINQ to SQL code in, so I would only have one DataContext.
My current application, though, is getting too big, so I started splitting my code up into different classes (one for Customer, one for Location, one for Supplier...), and they all have their own DataContext: DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
Now when I try to save a contact with a location (which I got from a different DataContext) I get the following error:
"An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported."
I assume this is because I create a DataContext for every class, but I don't know how to do this differently.
I'm looking for any ideas, thanks.
My classes look like the following:
public class LocatieManagement
{
private static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
public static void addLocatie(locatie nieuweLocatie)
{
dc.locaties.InsertOnSubmit(nieuweLocatie);
dc.SubmitChanges();
}
public static IEnumerable<locatie> getLocaties()
{
var query = (from l in dc.locaties
select l);
IEnumerable<locatie> locaties = query;
return locaties;
}
public static locatie getLocatie(int locatie_id)
{
var query = (from l in dc.locaties
where l.locatie_id == locatie_id
select l).Single();
locatie locatie = query;
return locatie;
}
}
That happens if the entity is still attached to the original datacontext. Turn off deferred loading (dc.DeferredLoadingEnabled = false):
partial class SomeDataContext
{
partial void OnCreated()
{
this.DeferredLoadingEnabled = false;
}
}
You may also need to serialize/deserialize it once (e.g. using a DataContractSerializer) to disconnect it from the original DC. Here's a clone method that uses the DataContractSerializer:
internal static T CloneEntity<T>(T originalEntity) where T : someentitybaseclass
{
Type entityType = typeof(T);
DataContractSerializer ser =
new DataContractSerializer(entityType);
using (MemoryStream ms = new MemoryStream())
{
ser.WriteObject(ms, originalEntity);
ms.Position = 0;
return (T)ser.ReadObject(ms);
}
}
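Hypothetical usage of the two together - clone the entity to detach it, then attach the clone to a second DataContext as modified (Attach(entity, true) marks it for update; this assumes your entities have a timestamp column or suitable UpdateCheck settings for optimistic concurrency):

var clone = CloneEntity(locatie); // no longer tracked by the original DataContext
using (var otherDc = new DatabaseDesignDataContext())
{
    otherDc.locaties.Attach(clone, true); // attach as modified
    otherDc.SubmitChanges();
}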
This happens because you're trying to manage data from differing contexts - you will need to properly detach and attach your objects to proceed - however, I would suggest preventing the need to do this.
So, first things first: remove the data context instances from your entity classes.
From here, create 'operational' classes that expose the CRUD operations and whatnot to work with that specific type of entity class, with each method using a dedicated data context for that unit of work, perhaps overloading to accept a current context for when a unit of work entails several operations (a sketch follows below).
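A sketch of that idea applied to the LocatieManagement class from the question - each operation owns its context, and an overload lets a caller that already has a context (and a larger unit of work) pass it in and control SubmitChanges itself:

public class LocatieManagement
{
    public static void AddLocatie(locatie nieuweLocatie)
    {
        using (var dc = new DatabaseDesignDataContext())
        {
            AddLocatie(nieuweLocatie, dc);
            dc.SubmitChanges();
        }
    }

    public static void AddLocatie(locatie nieuweLocatie, DatabaseDesignDataContext dc)
    {
        // the caller owns the context and decides when to call SubmitChanges
        dc.locaties.InsertOnSubmit(nieuweLocatie);
    }
}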
I know everybody probably gets tired of hearing this, but you really should look at using Repositories for Data Access (and using the Unit of Work pattern to ensure that all of the repositories that are sharing a unit of work are using the same DataContext).
You can read up on how to do things here: Revisiting the Repository and Unit of Work Patterns with Entity Framework (the same concepts apply to LINQ to SQL as well).
Another solution I found for this is to create one parent DataContext class:
public class DataContext
{
public static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
}
And let all my other classes inherit this one.
public class LocatieManagement : DataContext
{
public static void addLocatie(locatie nieuweLocatie)
{
dc.locaties.InsertOnSubmit(nieuweLocatie);
dc.SubmitChanges();
}
}
Then all the classes use the same DataContext.