What are the different approaches to Object-Object mapping in .NET? - c#

I need to do some mapping between objects (e.g. PersonModel to PersonViewModel) and am researching the different approaches to doing this. Specifically, I'm working with Entity Framework and trying to map the generated models to viewmodels.
However, I've yet to find an article or resource that compiles a list of how you can do this. So far, I've come across the following:
Implicit Conversion (I think this is the most basic approach, since you manually map properties from one object to another; it's simple but tedious)
Extension Methods (haven't worked with this yet)
Reflection (I've tinkered a bit, but only managed very basic mapping)
Automapper (VERY popular, but I'm having trouble making it work well with EF)
Value Injecter (haven't worked with this yet)
Emit Mapper (haven't worked with this yet, but probably I would have trouble making it work with EF?)
Can you please point out and elaborate on the approaches available, as well as the pros/cons of each? For example, I saw some mention that AutoMapper is slow compared to manual mapping. Or possibly, point out an article that tackles this?
EDIT: since some may ask what problem I have with AutoMapper, please see this: Automapper: How to map IList to EntityCollection

Well, I can give you a way to do your own mapping; it's pretty simple to do and can be executed quickly over a large amount of data. I'll show you what I'd do, and then try to elaborate on why I do what I do. Here goes:
public class PersonViewModel
{
    public static Expression<Func<Person, PersonViewModel>> FromPerson
    {
        get
        {
            return p => new PersonViewModel
            {
                Name = p.FirstName,
                SurName = p.LastName
            };
        }
    }

    public string Name { get; set; }
    public string SurName { get; set; }

    public static PersonViewModel CreateViewModel(Person original)
    {
        var func = FromPerson.Compile();
        var vm = func(original);
        return vm;
    }
}
Now you'll notice that I have two ways to convert from a Person EF model to a ViewModel. This is because the first one, which uses the Expression<Func<Person, PersonViewModel>>, is used to convert a large bulk of objects in a Select() call. Simple usage:
return people.Select(PersonViewModel.FromPerson);
In this case we've probably retrieved a collection of Person objects from the DB and need to show them, say, in a list, but using the ViewModel. This way the conversion is performed in bulk, which is much faster than creating each object via the other method. The static CreateViewModel method, on the other hand, can be used to map a single object where needed, e.g. when you've fetched a single user's data from the DB and need to show it through your ViewModel. In that case the static method is appropriate, whereas the Expression is mainly for bulk conversions.
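For completeness, here's what the single-object path might look like in a controller action (the dbContext and People names are just assumptions for illustration):
// Hypothetical usage of the static method for a single entity.
var person = dbContext.People.Single(p => p.Id == id);
PersonViewModel vm = PersonViewModel.CreateViewModel(person);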
That's what I can offer, aside from wondering what's wrong with using AutoMapper, since it's pretty straightforward and you haven't really elaborated on what the problem is with using it alongside EF. Hope this helps you at least a little bit in your problem :)

Well, if you do know the objects’ types upfront then the accepted answer works great.
If not I’d go with AutoMapper or PropMapper.
If you want to roll something of your own, the most "up to date" approach is to use compiled Expression trees: enumerate the source type's properties, build a block of assign-expressions for each matching property, and compile this block:
var e = Expression.Assign(Expression.Property(destObj, destProp), Expression.Property(srcObj, srcProp));
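To make that concrete, here's a minimal sketch of a property-matching mapper built this way; every name in it (SimpleMapper, Build, etc.) is mine, not from the linked post, and it only handles same-named, same-typed public properties:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public static class SimpleMapper<TSource, TDest> where TDest : new()
{
    // Compiled once per (TSource, TDest) pair; later calls run the cached delegate.
    private static readonly Func<TSource, TDest> MapFunc = Build();

    public static TDest Map(TSource source)
    {
        return MapFunc(source);
    }

    private static Func<TSource, TDest> Build()
    {
        var src = Expression.Parameter(typeof(TSource), "src");
        var dest = Expression.Variable(typeof(TDest), "dest");

        var body = new List<Expression>
        {
            Expression.Assign(dest, Expression.New(typeof(TDest)))
        };

        // One assign-expression per matching property.
        foreach (var destProp in typeof(TDest).GetProperties().Where(p => p.CanWrite))
        {
            var srcProp = typeof(TSource).GetProperty(destProp.Name);
            if (srcProp != null && srcProp.CanRead && srcProp.PropertyType == destProp.PropertyType)
            {
                body.Add(Expression.Assign(
                    Expression.Property(dest, destProp),
                    Expression.Property(src, srcProp)));
            }
        }

        body.Add(dest); // the block's last expression is its result

        var block = Expression.Block(new[] { dest }, body);
        return Expression.Lambda<Func<TSource, TDest>>(block, src).Compile();
    }
}
Usage would be something like var vm = SimpleMapper<Person, PersonViewModel>.Map(person); the reflection cost is paid once per type pair, and subsequent calls run the compiled delegate.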
Here’s a step-by-step blog post on this: https://dev.to/alexjitbit/yet-another---lightning-fast---object-mapper-for-net-2bj2

Related

Asp.net C# model object binding

The product has some fields that cannot be changed, so I want to bind the object with only selected fields.
For now I'm doing it the way shown below (binding manually), but I believe there is a better and cleaner way. How can I bind a model object to another model object with only selected fields?
[HttpPut]
public JsonResult update(Product editedProduct) {
    Product originalProduct = unitOfWork.ProductRepository.Get(filter: q => q.no == editedProduct.no).Single();
    originalProduct.name = editedProduct.name;
    originalProduct.modelNo = editedProduct.modelNo;
    originalProduct.size = editedProduct.size;
    originalProduct.color = editedProduct.color;
    originalProduct.description = editedProduct.description;
    originalProduct.price = editedProduct.price;
    //originalProduct.upc = editedProduct.upc; //UPC can not be changed
    //originalProduct.sku = editedProduct.sku; //SKU can not be changed
    unitOfWork.Save();
    return Json(new { success = true });
}
Please advise.
In my opinion there is absolutely nothing wrong with this approach. It is possible to do some things differently, but that does not mean they would be better.
Create a DTO/ViewModel class to represent the data you accept and return from the service. This way you can have different shapes of the data where you need them; for example, you can skip a security-critical field. I think this would be an improvement (see the sketch after this list).
Use a framework like AutoMapper to do the mapping between the objects. This is quite a popular approach, but I personally prefer explicitly copying the fields.
You can update the object without retrieving it. I assume you are using Entity Framework. You can refer to this question for details: How to update a record without selecting that record again in ADO.NET Entity Framework? I personally don't think this will improve your code; do it only if you have performance issues with your current approach.
You can put the mapping code in your repository in an Update method or something.
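To make the DTO suggestion from the first point concrete, a minimal sketch (the ProductEditDto name and the field types are assumptions based on your code):
// Hypothetical DTO carrying only the editable fields; upc and sku are deliberately absent.
public class ProductEditDto
{
    public int no { get; set; }
    public string name { get; set; }
    public string modelNo { get; set; }
    public string size { get; set; }
    public string color { get; set; }
    public string description { get; set; }
    public decimal price { get; set; }
}
The action would then accept a ProductEditDto instead of a full Product, so the non-editable fields can never even arrive from the client.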
BTW, it seems like your repository is currently useless. You are just writing a wrapper around your ORM, which makes the code more complex and more error-prone. Repository is an anti-pattern when you are using an ORM: your ORM is the repository.
Well, don't do it.
For this case you should create a separate ViewModel with only necessary fields.
For example, I would not expose the setter in the class:
public class Product
{
    public string upc { get; }
}
This will not allow the property to be set from outside the class.
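Note that a get-only auto-property (C# 6+) can still be assigned from the constructor, so the value is fixed at creation time; a sketch:
public class Product
{
    public string upc { get; }

    public Product(string upc)
    {
        this.upc = upc; // assignable only here or in a field initializer
    }
}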

MongoDB: How to define a dynamic entity in my own domain class?

New to MongoDB. I've set up a C# web project in VS 2013 and need to insert data as documents into MongoDB. The number of key-value pairs can be different each time.
For example,
document 1: Id is "1", data is one key-value pair: "order":"shoes"
document 2: Id is "2", data is three key-value pairs: "order":"shoes", "package":"big", "country":"Norway"
In this "Getting Started" says because it is so much easier to work with your own domain classes this quick-start will assume that you are going to do that. suggests make our own class like:
public class Entity
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}
then use it like:
var entity = new Entity { Name = "Tom" };
...
entity.Name = "Dick";
collection.Save(entity);
Well, that defeats the idea of having no fixed columns, right?
So I guess BsonDocument is the model to use; are there any good samples for beginners?
I'm amazed how often this topic comes up... Essentially, this is more of a 'statically typed language limitation' than a MongoDB issue:
Schemaless doesn't mean you don't have any schema per se, it basically means you don't have to tell the database up front what you're going to store. It's basically "code first" - the code just writes to the database like it would to RAM, with all the flexibility involved.
Of course, the typical application will have some sort of recurring data structure, some classes, some object-oriented paradigm in one way or another. That is also true for the indexes: indexes are (usually) 'static' in the sense that you do have to tell MongoDB up front which fields to index.
However, there is also the use case where you don't know what to store. If your data is really that unforeseeable, it makes sense to think "code first": what would you do in C#? Would you use the BsonDocument? Probably not. Maybe an embedded Dictionary does the trick, e.g.
public class Product
{
    public ObjectId Id { get; set; }
    public decimal Price { get; set; }
    public Dictionary<string, string> Attributes { get; set; }
    // ...
}
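A rough usage sketch for the documents from the question (assuming a MongoCollection<Product> from the legacy C# driver; the names are illustrative):
// Each document can carry a different set of attributes.
var order = new Product
{
    Price = 49.99m,
    Attributes = new Dictionary<string, string>
    {
        { "order", "shoes" },
        { "package", "big" },
        { "country", "Norway" }
    }
};
collection.Insert(order);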
This solution can also work with multikeys to simulate a large number of indexes and make queries on the attributes reasonably fast (though the lack of static typing makes range queries tricky).
It really depends on your needs. If you want to have nested objects and static typing, things get a lot more complicated than this. Then again, the consumer of such a data structure (i.e. the frontend or client application) often needs to make assumptions that make it easy to digest this information, so it's often not possible to make this type safe anyway.
Other options include indeed using the BsonDocument, which I find too invasive in the sense that you make your business models depend on the database driver implementation; or using a common base class like ProductAttributes that can be extended by classes such as ProductAttributesShoes, etc. This question really revolves around the whole system design - do you know the properties at compile time? Do you have dropdowns for the property values in your frontend? Where do they come from?
If you want something reusable and flexible, you could simply use a JSON library, serialize the object to string and store that to the database. In any case, the interaction with such objects will be ugly from the C# side because they're not statically typed.
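A sketch of that JSON-string idea with Json.NET (the JsonEntity shape is an assumption):
// Store the flexible part as an opaque JSON string.
public class JsonEntity
{
    public ObjectId Id { get; set; }
    public string Json { get; set; }
}

var entity = new JsonEntity
{
    Json = JsonConvert.SerializeObject(new { order = "shoes", package = "big" })
};
collection.Insert(entity); // but you lose the ability to query inside the payload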

DTO simplified. Dynamic tuples. How? Let's see a possible solution [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I've been thinking about a possible usage of the dynamic keyword. Being a big defender (paladin, say) of strongly-typed programming languages, I needed to open my mind a lot and try to think of an alternative way of doing things.
Something I find hard to maintain is how layers and tiers communicate using data transfer objects (DTOs). How many dozens of DTOs have you needed just to transfer data through your application or service? I'm sure we can count a lot of them in any well-done project.
And what about the domain-object-to-DTO translation? You end up creating dozens of DTOs which transfer info from the domain objects involved in some transaction, flow or process.
In the end, what are the most important things to keep in mind when you develop software?
- Money and/or time first: sizing the project so you can cover it with the right solution using the available resources.
- Creating a maintainable, scalable, performant, well-designed application, all with our limited resources.
Talking about the second point: what is maintainable and scalable? An incredibly well-done design with thousands of classes, interfaces, ... to maintain and improve? Yes. Think of a well-done DDD or MVC solution: easy to follow, improve, fix... but not in terms of time. You can do it right, but you'll need a lot of time.
Perhaps there's a solution for that with the flagship Dynamic Language Runtime and the associated "dynamic" keyword in C# 4.0 on top of .NET Framework 4.0.
Yes, dynamic seems evil if it's not used for interop or for working with dynamic languages... Right! But, remembering what I said in the second sentence, I wanted to go further and think this topic through.
We have tuples. Good! We can have typed multiple results as the return value of a method or property. We don't need custom DTOs! Wrong, because we don't have named tuples. How can we maintain code that returns results with properties like "Item1, Item2, ItemN"?
Tuples seem to be a good solution, for example, for replacing the "out" keyword in methods like bool TryParse(value, out result) with something like Tuple<bool, T> TryParse(value), but I don't think they are a good replacement for DTOs, because we're losing the meaning of the code... the code is no longer readable.
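For illustration, the tuple-returning shape would look roughly like this (a hypothetical wrapper around int.TryParse):
public static Tuple<bool, int> TryParseInt(string value)
{
    int result;
    bool ok = int.TryParse(value, out result);
    return Tuple.Create(ok, result);
}

// The call site shows the readability problem: Item1/Item2 carry no meaning.
var parsed = TryParseInt("42");
if (parsed.Item1) Console.WriteLine(parsed.Item2);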
Later, I thought about how to create attribute-based named tuples, but that has a drawback: you need reflection to get something like tuple.Return("Name") to work. Discarded.
It was at this point that I thought: what about dynamic and the DLR? I figured out how to build dynamic tuples and work with them. Something like the following is possible, but I want to discuss with you whether you find it fine or not:
using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;

public static class DynamicTupleExtensions
{
    public static dynamic Return(this IEnumerable<dynamic> source, Func<dynamic, dynamic> returnPredicate)
    {
        return source.Select(result => returnPredicate(result)).Single();
    }

    public static IEnumerable<Tuple<T1>> ToTuple<T1>(this IEnumerable<dynamic> source)
    {
        // Reads the first property of the anonymous type via reflection.
        return source.Select(dynamicTuple => Tuple.Create((T1)dynamicTuple.GetType().GetProperties()[0].GetValue(dynamicTuple, null)));
    }

    public static IEnumerable<T> Cast<T>(this IEnumerable<dynamic> source)
    {
        // Round-trips through JSON to convert the anonymous type to T.
        return source.Select(dynamicTuple => JsonConvert.DeserializeObject<T>((string)JsonConvert.SerializeObject(dynamicTuple)));
    }

    public static T Cast<T>(this object source)
        where T : class
    {
        return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source));
    }
}

public class StronglyTypedResult
{
    public int Value { get; set; }
}

public class Program
{
    private static void Main(string[] args)
    {
        int result = AddOne(10).Return(value => value.Value);
        Tuple<int> resultAsTuple = AddOne(10).ToTuple<int>().Single();
        StronglyTypedResult typedResult = AddOne(10).Cast<StronglyTypedResult>().Single();
        Console.WriteLine(result);
        Console.Read();
    }

    private static IEnumerable<dynamic> AddOne(int value)
    {
        return new List<dynamic>
        {
            new { Value = value + 1 }
        };
    }
}
Looking at the code, you'll see that I'm "abusing" anonymous types and the dynamic keyword in order to create an enumerable of dynamic tuples.
Later, you can retrieve the dynamic tuple's value using three approaches:
- Full dynamic-based tuple retrieval.
- Conversion to a typed tuple.
- Casting to a DTO (it's a conversion).
The first one is the most controversial: it's a full dynamic tuple consumed "as is". You lose compile-time type checking, but you gain a lot of flexibility. I would use it in a project with high discipline and strong naming and convention guidelines (this is heaven, I know).
The second one is a mix of compile-time and run-time type checking: the tuple is handled dynamically, but you get a strongly-typed tuple. You lose the named-tuple feel of the first approach. This approach would be for use inside a method body, where maybe there's no access to the classes representing the DTOs, you don't have them, or there's no implementation for them (same as the first option).
Finally, the third one is the safest solution: you use dynamic tuples in some layer or tier, and wherever you have classes representing DTOs, you cast the dynamic tuple to a DTO, which then works like most actual C# DTOs. This isn't a good option, though, because I'm using a JSON serializer in order to lie to the C# compiler and make an on-the-fly conversion from the anonymous type to the strongly-typed DTO.
At the end of the day, what is the question? The question is: what's your opinion of this solution?
Is it maintainable?
What would be the performance impact of such an approach?
Do you think that discipline is the only requirement for following this approach, or is it just a big fail?
Do you think this solution is a big failure and a crash in terms of creating well-designed solutions?
It's just software philosophy, and I'm pretty sure this approach is going to have defenders and haters.
Keep in mind that this isn't "the asker thinks this is the best solution"; I'm just throwing out a conclusion I want to share with everyone.
Thanks for reading, and I hope you'll all bring good points (even if you hate my conclusions, that's what this is about!).
In the end (after one or two years) your early discussion has come true and become more popular (check here) with "Dynamic model binding with ASP.NET Web API". The basic idea is to avoid the large number of DTOs by using dynamics, thereby increasing flexibility.
public class Controller : ApiController
{
    public dynamic Post(dynamic contract)
    {
        return contract;
    }
}
It seems like you are looking for a combination of AutoMapper Dynamic Mapping and the ExpandoObject. With some creative reflection, you could probably accomplish your goal in this way.
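For reference, a tiny sketch of the ExpandoObject half of that suggestion (requires System.Dynamic and System.Collections.Generic):
dynamic dto = new ExpandoObject();
dto.Name = "Tom"; // members are added on the fly
dto.Age = 42;

// ExpandoObject also implements IDictionary<string, object>,
// which is handy for reflection-free enumeration of its members:
foreach (var pair in (IDictionary<string, object>)dto)
    Console.WriteLine("{0} = {1}", pair.Key, pair.Value);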

Is it ok to use C# Property like this

One of my fellow developers has code similar to the following snippet:
class Data
{
    public string Prop1
    {
        get
        {
            // return the value stored in the database via a query
        }
        set
        {
            // Save the data to local variable
        }
    }

    public void SaveData()
    {
        // Write all the properties to a file
    }
}

class Program
{
    public void SaveData()
    {
        Data d = new Data();
        // Fetch the information from database and fill the local variable
        d.Prop1 = d.Prop1;
        d.SaveData();
    }
}
class Program
{
public void SaveData()
{
Data d = new Data();
// Fetch the information from database and fill the local variable
d.Prop1 = d.Prop1;
d.SaveData();
}
}
Here the Data class properties fetch the information from the DB dynamically. When there is a need to save the Data to a file, the developer creates an instance and fills each property using self-assignment, then finally calls Save. I tried arguing that this usage of properties is not correct, but he is not convinced.
These are his points:
There are nearly 20 such properties.
Fetching all the information is not required except when saving.
Instead of self-assignment, writing a utility method to fetch everything would duplicate the code already in the properties.
Is this usage correct?
I don't think that another developer who works with the same code will be happy to see:
d.Prop1 = d.Prop1;
Personally I would never do that.
Also, it is not the best idea to use a property to load data from the DB.
I would have a method which loads the data from the DB into a local variable, and then you can get that data using the property. Also, get and set should logically work with the same data; it is strange to use get for fetching data from the DB but set for working with a local variable.
Properties should really be as lightweight as possible.
When other developers are using properties, they expect them to be intrinsic parts of the object (that is, already loaded and in memory).
The real issue here is that of symmetry - the property get and set should mirror each other, and they don't. This is against what most developers would normally expect.
Having the property load up from database is not recommended - normally one would populate the class via a specific method.
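As a sketch, the "specific method" shape might look like this (all names invented for illustration):
public class Data
{
    public string Prop1 { get; set; } // plain property, already in memory

    // One explicit, visible round trip instead of a hidden query per getter.
    public static Data LoadFromDatabase(int id)
    {
        var d = new Data();
        // d.Prop1 = ... fill all properties from a single query ...
        return d;
    }
}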
This is pretty terrible, imo.
Properties are supposed to be quick / easy to access; if there's really heavy stuff going on behind a property it should probably be a method instead.
Having two utterly different things going on behind the same property's getter and setter is very confusing. d.Prop1 = d.Prop1 looks like a meaningless self-assignment, not a "Load data from DB" call.
Even if you do have to load twenty different things from a database, doing it this way forces it to be twenty different DB trips; are you sure multiple properties can't be fetched in a single call? That would likely be much better, performance-wise.
"Correct" is often in the eye of the beholder. It also depends how far or how brilliant you want your design to be. I'd never go for the design you describe, it'll become a maintenance nightmare to have the CRUD actions on the POCOs.
Your main issue is the absense of separations of concerns. I.e., The data-object is also responsible for storing and retrieving (actions that need to be defined only once in the whole system). As a result, you end up with duplicated, bloated and unmaintainable code that may quickly become real slow (try a LINQ query with a join on the gettor).
A common scenario with databases is to use small entity classes that only contain the properties, nothing more. A DAO layer takes care of retrieving and filling these POCOs with data from the database and defined the CRUD actions only ones (through some generics). I'd suggest NHibernate for the ORM mapping. The basic principle explained here works with other ORM mappers too and is explained here.
The reasons, esp. nr 1, should be a main candidate for refactoring this into something more maintainable. Duplicated code and logic, when encountered, should be reconsidered strongly. If the gettor above is really getting the database data (I hope I misunderstand that), get rid of it as quickly as you can.
Overly simplified example of separation of concerns:
class Data
{
    public string Prop1 { get; set; }
    public string Prop2 { get; set; }
}

class Dao<T>
{
    public void SaveEntity(T data)
    {
        // use reflection for saving your properties (this is what any ORM does for you)
    }

    public IList<T> GetAll()
    {
        // use reflection to retrieve all data of this type (again, ORM does this for you)
        throw new NotImplementedException(); // placeholder for the sketch
    }
}

// usage:
Dao<Data> myDao = new Dao<Data>();
IList<Data> allData = myDao.GetAll();
// modify, query etc. using the Dao; lazy evaluation and caching are done by the ORM for performance,
// but more importantly, this design keeps your code clean, readable and maintainable.
EDIT:
One question you should ask your co-worker: what happens when you have many Data rows in the database, or when a property is the result of a joined query (foreign-key table)? Have a look at Fluent NHibernate if you want a smooth transition from one situation (unmaintainable) to another (maintainable) that's easy enough for anybody to understand.
If I were you I would write serialize/deserialize functions, then provide properties as lightweight wrappers around the in-memory results.
Take a look at the ISerializable interface: http://msdn.microsoft.com/en-us/library/system.runtime.serialization.iserializable.aspx
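A minimal sketch of the ISerializable shape (illustrative only; just one property shown):
using System;
using System.Runtime.Serialization;

[Serializable]
public class Data : ISerializable
{
    public string Prop1 { get; set; } // lightweight in-memory wrapper

    public Data() { }

    // Deserialization constructor required by ISerializable.
    protected Data(SerializationInfo info, StreamingContext context)
    {
        Prop1 = info.GetString("Prop1");
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Prop1", Prop1);
    }
}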
This would be very hard to work with. If you set Prop1 and then get Prop1, you could end up with different results, e.g.:
//set Prop1 to "abc"
d.Prop1 = "abc";
//if the data source holds "xyz" for Prop1
string myString = d.Prop1;
//myString will equal "xyz"
Reading the code without the comments, you would expect myString to equal "abc", not "xyz"; this could be confusing.
It would make working with the properties very difficult and would require a save every time you change a property for things to behave as expected.
As well as agreeing with what everyone else has said about this example: what happens if there are other fields in the Data class, i.e. Prop2, Prop3, etc.? Do they all go back to the database each time they are accessed in order to "return the value stored in the database via a query"? 10 property reads would mean 10 database hits, and setting 10 properties, 10 writes to the database. That's not going to scale.
In my opinion, that's an awful design. Using a property getter to do some "magic" stuff makes the system awkward to maintain. If I joined your team, how would I know about the magic behind those properties?
Create a separate method whose name says what it does.

Handling collection properties in a class and NHibernate entities

I was wondering what is the recommended way to expose a collection within a class and if it is any different from the way of doing that same thing when working with NHibernate entities.
Let me explain... I never had a specific problem with my classes exposing collection properties like:
IList<SomeObjType> MyProperty { get; set; }
Having the setter protected or private sometimes gives me a bit more control over how I want to handle the collection.
I recently came across this article by Davy Brion:
http://davybrion.com/blog/2009/10/stop-exposing-collections-already/
Davy clearly recommends exposing collections as IEnumerables instead of, say, Lists, in order to deny users the option of directly manipulating the contents of those collections. I can understand his point, but I am not entirely convinced, and judging by the comments on his post I am not the only one.
When it comes to NHibernate entities though, it makes much sense to hide the collections in the way he proposes especially when cascades are in place. I want to have complete control of an entity that is in session and its collections, and exposing AddXxx and RemoveXxx for collection properties makes much more sense to me.
The problem is how to do it?
If I expose the entity's collections as IEnumerables, I have no way of adding/removing elements without either converting them to Lists via ToList(), which creates a new list so nothing gets persisted, or casting them to Lists, which is a pain because of proxies and lazy loading.
The overall idea is to not allow an entity to be retrieved and have its collections manipulated (add/remove elements) directly, but only through the methods I expose, while honouring the cascades for collection persistence.
Your advice and ideas will be much appreciated.
How about...
private IList<string> _mappedProperty;

public IEnumerable<string> ExposedProperty
{
    get { return _mappedProperty.AsEnumerable<string>(); }
}

public void Add(string value)
{
    // Apply business rules, raise events, queue message, etc.
    _mappedProperty.Add(value);
}
This solution is possible if you use NHibernate to map to the private field, i.e. _mappedProperty. You can read more about how to do this in the access and naming strategies documentation here.
In fact, I prefer to map all my classes like this. It's better that the developer decides how to define the public interface of the class, not the ORM.
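With Fluent NHibernate, for example, the field-access mapping might look roughly like this (the entity and element-column names are assumptions):
// Map the collection through the _mappedProperty backing field, not the property.
public class MyEntityMap : ClassMap<MyEntity>
{
    public MyEntityMap()
    {
        Id(x => x.Id);
        HasMany(x => x.ExposedProperty)
            .Element("Value") // a collection of plain strings
            .Access.CamelCaseField(Prefix.Underscore);
    }
}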
How about exposing them as a ReadOnlyCollection?
private IList<SomeObjType> _mappedProperty;

public ReadOnlyCollection<SomeObjType> ExposedProperty
{
    get
    {
        return new ReadOnlyCollection<SomeObjType>(_mappedProperty);
    }
}
I am using NHibernate and I usually keep the collections as ISet and make the setter protected.
ISet<SomeObjType> MyProperty { get; protected set; }
I also provide AddXxx and RemoveXxx methods for collection properties where they are required. This has worked quite satisfactorily for me most of the time, but I will say that there have been instances where it made sense to allow client code to add items to the collection directly.
Basically, what I have seen is that if I follow the principle of "Tell, Don't Ask" in my client code, without worrying too much about enforcing rigid access constraints on my domain object properties, I always end up with a good design.
